Windows Bare Metal Servers on NSX-T Overlay Networks

In this post, I will configure a Windows 2016/2019 bare metal server as a transport node in NSX-T and then configure an NSX-T overlay segment on that server, which allows VMs and bare metal servers on the same network to communicate.

To use NSX-T Data Center on a Windows physical server (bare metal server), let's first understand a few terms which we will use in this post.

  • Application – represents the actual application running on the physical server, such as a web server or a database server.
  • Application Interface – represents the network interface card (NIC) which the application uses for sending and receiving traffic. One application interface per physical server is supported.
  • Management Interface – represents the NIC which is used to manage the physical server.
  • VIF – the peer of the application interface, which is attached to the logical switch. This is similar to a VM vNIC.

Now let's configure our Windows server to operate in an NSX overlay environment:

Enable the WinRM Service on Windows Server 2016/2019

First, we need to enable Windows Remote Management (WinRM) on Windows Server 2016/2019 to allow the Windows server to interoperate with third-party software and hardware. To enable the WinRM service with a self-signed certificate, run:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
PS$ wget -OutFile ConfigureWinRMService.ps1 https://raw.github.com/vmware/bare-metal-server-integration-with-nsxt/blob/master/bms-ansible-nsx/windows/ConfigureWinRMService.ps1
PS$ powershell.exe -ExecutionPolicy ByPass -File ConfigureWinRMService.ps1

Run the following command to verify the configuration of WinRM listeners:

winrm e winrm/config/listener

NOTE: For production bare metal servers, please enable WinRM with HTTPS for security reasons; the procedure is explained here.
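For reference, here is a minimal sketch of creating an HTTPS listener with a self-signed certificate (the hostname is a placeholder; in production you would use a CA-signed certificate instead):

PS$ $cert = New-SelfSignedCertificate -DnsName "bms01.corp.local" -CertStoreLocation Cert:\LocalMachine\My
PS$ winrm create winrm/config/Listener?Address=*+Transport=HTTPS "@{Hostname=`"bms01.corp.local`";CertificateThumbprint=`"$($cert.Thumbprint)`"}"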

Installing the NSX-T Kernel Module on Windows Server 2016/2019

Now let's proceed with installing the NSX kernel module on the Windows Server 2016/2019 bare metal server. Make sure to download the NSX kernel module for Windows Server 2016/2019 with the same version as your NSX-T instance from VMware downloads.

Start the installation of the NSX kernel module by executing the .exe file on your Windows BM server.

Configure the bare metal server as a transport node in NSX-T

Before we add the bare metal server as a transport node, we must create a new uplink profile in NSX-T to use for bare metal servers. An uplink profile defines policies for the uplinks. The settings defined by uplink profiles can include teaming policies, active and standby links, transport VLAN ID, and MTU setting.

In my lab the Windows 2016/2019 bare metal server has two network adapters: one NIC on the management VLAN and the other on a TEP VLAN (VLAN 160).

Once the uplink profile is configured, we can proceed with adding the Windows 2016/2019 bare metal server as a transport node in NSX-T. In the NSX-T web UI, go to System –> Fabric –> Nodes and click on +ADD.

Enter the management interface IP address of your Windows bare metal host and its credentials, and do not change the Installation Location. NSX-T will validate your credentials against the Windows host and then allow you to move to the next step.

On the next screen, choose a virtual switch name or leave the default, select the overlay transport zone (since we are connecting this host to the overlay), and select the uplink profile and management uplink interface.

On the next screen, configure the IP address, subnet, and gateway for the TEP interface, either by specifying a static IP list or by choosing an IP pool which belongs to the TEP VLAN.

Click on Next. This will start preparing your Windows bare metal server for NSX-T.

Once preparation is complete, we can attach a segment from the same screen or we can click "Continue Later"; let's click "Continue Later" for now, as we will add the segment in a separate step.

Now, if you look at your Windows bare metal server in the NSX-T console, it is ready for NSX-T and prompting us to attach an overlay segment.

Attach an Overlay Segment

Select the host in the "Host Transport Nodes" section, click on "Action", and then click on "Manage Segment", which takes you to the same screen that SELECT SEGMENT would have shown during the original deployment.

Now select the segment on which the application interface for the physical server will reside and click on "ADD SEGMENT PORT".

Add Segment Port and Attach Application Interface

On the Add Segment Port screen:

Choose Assign New IP (this will be your application IP on the Windows bare metal server) –> NSX Interface Name (default is "nsx-eth") – this is the application interface name on the physical server.

Default Gateway –> provide the T0 or T1 gateway address.

IP Assignment – I am using Static, but you can also use DHCP or an IP pool for the application interface.

Save – once Save is pressed, the configuration is sent to the physical server, and you can see on the physical server that the application IP has been assigned to a virtual interface.

Now you can see the host configuration in the NSX-T Manager console; everything is green and up.

Now you can see that this bare metal server is reachable from a VM with IP address "172.16.20.101", which is on the same segment as the physical server, without any bridging.

If you click on the Windows server, you can see other information, specifically the "Geneve Tunnels" between the ESXi host on which the VM is running and the Windows bare metal host on which your application is running.

This completes the configuration. It gives customers/partners the opportunity to run VMs and bare metal servers on the same network, with security (like micro-segmentation) managed from a single console: the NSX-T console. I hope this helps; please share your feedback 🙂

Cloud Native Runtimes for Tanzu

Dynamic Infrastructure

This is an IT concept whereby underlying hardware and software can respond dynamically and more efficiently to changing levels of demand. Modern cloud infrastructure built on VMs and containers requires automated:

  • Provisioning, Orchestration, Scheduling
  • Service Configuration, Discovery and Registry
  • Network Automation, Segmentation, Traffic Shaping and Observability

What is Cloud Native Runtimes for Tanzu?

Cloud Native Runtimes for VMware Tanzu is a Kubernetes-based platform to deploy and manage modern serverless workloads. Cloud Native Runtimes for Tanzu is based on Knative and runs on a single Kubernetes cluster. Cloud Native Runtimes automates all aspects of these dynamic infrastructure requirements.

Serverless ≠ FaaS

| Serverless | FaaS |
| --- | --- |
| Multi-threaded (server) | Cloud provider specific |
| Cloud provider agnostic | Single-threaded functions |
| Long lived (days) | Short lived (minutes) |
| Offers more flexibility | Managing a large number of functions can be tricky |

Cloud Native Runtime Installation

Command Line Tools Required for Cloud Native Runtimes for Tanzu

The following command line tools need to be downloaded and installed on the client workstation from which you will connect to and manage the Tanzu Kubernetes cluster and Tanzu Serverless.

kubectl (Version 1.18 or newer)

  • Using a browser, navigate to the Kubernetes CLI Tools (available in vCenter Namespace) download URL for your environment.
  • Select the operating system and download the vsphere-plugin.zip file.
  • Extract the contents of the ZIP file to a working directory. The vsphere-plugin.zip package contains two executable files: kubectl and the vSphere Plugin for kubectl. kubectl is the standard Kubernetes CLI. kubectl-vsphere is the vSphere Plugin for kubectl that helps you authenticate with the Supervisor Cluster and Tanzu Kubernetes clusters using your vCenter Single Sign-On credentials.
  • Add the location of both executables to your system’s PATH variable.

kapp (Version 0.34.0 or newer)

kapp is a lightweight application-centric tool for deploying resources on Kubernetes. Being both explicit and application-centric, it provides an easier way to deploy and view all resources created together regardless of what namespace they're in. Download and install as below:
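The original post showed this step as a screenshot; here is a minimal equivalent sketch, assuming a Linux amd64 workstation and the Carvel GitHub releases (adjust the asset name and version to your platform):

wget -O kapp https://github.com/vmware-tanzu/carvel-kapp/releases/latest/download/kapp-linux-amd64
chmod +x kapp && sudo mv kapp /usr/local/bin/
kapp version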

ytt (Version 0.30.0 or newer)

ytt is a templating tool that understands YAML structure. Download, rename, and install as below:
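A similar sketch under the same assumptions:

wget -O ytt https://github.com/vmware-tanzu/carvel-ytt/releases/latest/download/ytt-linux-amd64
chmod +x ytt && sudo mv ytt /usr/local/bin/
ytt version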

kbld (Version 0.28.0 or newer)

kbld orchestrates image builds (delegating to tools like Docker, pack, or kubectl-buildkit) and registry pushes. It works with the local Docker daemon and remote registries, for both development and production use cases.

kn

The Knative client kn is your door to the Knative world. It allows you to create Knative resources interactively from the command line or from within scripts. Download, rename, and install as below:
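And the same pattern for kn, assuming the Knative client GitHub releases:

wget -O kn https://github.com/knative/client/releases/latest/download/kn-linux-amd64
chmod +x kn && sudo mv kn /usr/local/bin/
kn version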

Download Cloud Native Runtimes for Tanzu (Beta)

To install Cloud Native Runtimes for Tanzu, you must first download the installation package from VMware Tanzu Network:

  1. Log into VMware Tanzu Network.
  2. Navigate to the Cloud Native Runtimes for Tanzu release page.
  3. Download the serverless.tgz archive and release.lock
  4. Create a directory named tanzu-serverless.
  5. Extract the contents of serverless.tgz into your tanzu-serverless directory:
#tar xvf serverless.tgz

Install Cloud Native Runtimes for Tanzu on Tanzu Kubernetes Grid Cluster

For this installation I am using a TKG cluster deployed on vSphere 7 with Tanzu. To install Cloud Native Runtimes for Tanzu on Tanzu Kubernetes Grid, first target the cluster you want to use and verify that you are targeting the correct Kubernetes cluster by running:

#kubectl cluster-info

Run the installation script from the tanzu-serverless directory and wait for it to complete:

#./bin/install-serverless.sh

During my installation, I faced a couple of issues like this:

I just re-ran the installation, which automatically fixed these issues.

Verify Installation

To verify that your Serving installation was successful, create an example Knative service. For information about Knative example services, see Hello World – Go in the Knative documentation. Let's deploy a sample web application using the kn CLI. Run:

#kn service create hello --image gcr.io/knative-samples/helloworld-go -n default

Take the external URL from the output above and either add the Contour IP with the host name to your local hosts file or add a DNS entry, then browse to it. If everything is done correctly, your first application is running successfully.

You can list and describe the service by running command:

#kn service list -A
#kn service describe hello -n default

It looks like everything is up and ready as we configured it. Other things you can do with the Knative CLI include describing and listing the routes for the app:

#kn route describe hello -n default

Create your own app

This demo used an existing Knative example; why not make our own app from an image? Let's do it using the below YAML:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "This is my app"

Save this to k2.yaml or another name you like; now let's deploy this new service using the kubectl apply command:

#kubectl apply -f k2.yaml

Next, we can list the service and describe the new deployment, using the name provided in the YAML file:

And now finally browse the URL by going to http://helloworld.default.example.com (you will need to add an entry in DNS or your hosts file).
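For example, a hypothetical hosts-file entry (replace the placeholder with your Contour/Envoy external IP):

echo "<EXTERNAL-IP> helloworld.default.example.com" | sudo tee -a /etc/hosts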

This proves your application is running successfully. Cloud Native Runtimes for Tanzu is a great way for developers to move quickly into serverless development, with networking, autoscaling (even to zero), and revision tracking that allow users to see changes in apps immediately. Go ahead and try this in your lab, and once it is GA, in production.

Quick Tip – Delete Stale Entries on Cloud Director CSE

Container Service Extension (CSE) is a VMware vCloud Director (VCD) extension that helps tenants create and work with Kubernetes clusters. CSE brings Kubernetes as a Service to VCD by creating customized VM templates (Kubernetes templates) and enabling tenant users to deploy fully functional Kubernetes clusters as self-contained vApps.

If, for any reason, a tenant's cluster creation gets stuck and continues to show "CREATE:IN_PROGRESS" or "Creating" for many hours, it means that the cluster creation has failed for an unknown reason and the representing defined entity has not transitioned to the ERROR state.

Solution

To fix this, the provider admin needs to go into the API and delete these stale entries; there are a few simple steps to clean them up.

First – let's get the "X-VMWARE-VCLOUD-ACCESS-TOKEN" for API calls by making the below API call:

  • https://<vcd url>/cloudapi/1.0.0/sessions/provider
  • Authentication Type: Basic
  • Username/password – <adminid@system>/<password>

The above API call will return "X-VMWARE-VCLOUD-ACCESS-TOKEN" in the header section of the response window. Copy this token and use it as a "Bearer" token in subsequent API calls.
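As a hypothetical curl equivalent (the FQDN and credentials are placeholders, and the API version in the Accept header should match your VCD release):

curl -sk -D - -X POST "https://<vcd-fqdn>/cloudapi/1.0.0/sessions/provider" \
  -u 'administrator@system:<password>' \
  -H "Accept: application/json;version=36.0"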

Second – we need to get the cluster ID of the stale cluster which we want to delete. To get the cluster ID, go into the Cloud Director Kubernetes Container Clusters extension, click on the cluster which is stuck, and note the cluster ID in URN format.

Third (optional) – get the cluster details using the below API call, authenticating with the Bearer token from the first step:

GET https://<vcd-fqdn>/cloudapi/1.0.0/entities/<cluster-id>/

Fourth – delete the stale cluster using the below API call, providing the cluster ID captured in the second step and authenticating with the Bearer token:

DELETE https://<vcd-fqdn>/cloudapi/1.0.0/entities/<cluster-id>/
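Again as a hypothetical curl call:

curl -sk -X DELETE "https://<vcd-fqdn>/cloudapi/1.0.0/entities/<cluster-id>" \
  -H "Authorization: Bearer <token>" \
  -H "Accept: application/json;version=36.0"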

The above API call should respond with "204 No Content", which means the API call executed successfully.

Now if you log in to the Cloud Director "Kubernetes Container Clusters" extension, the above API call will have deleted the stale/stuck cluster entry.

Now you can go to the Cloud Director vApp section and check whether any vApp/VM is still running for that cluster; shut down that VM and delete it from Cloud Director. Three simple API calls complete the process.

VMware Cloud Director Assignable Storage Policies to Entity Types

Service providers can use storage policies in VMware Cloud Director to create a tiered storage offering (for example Gold, Silver, and Bronze) or even offer dedicated storage to tenants. With the enhancement of storage policies to support VMware Cloud Director entities, providers now have the flexibility to control how tenants use storage policies. Providers can have not only tiered storage, but isolated storage for running VMs, containers, edge gateways, catalogs, and so on.
A common use case that this Cloud Director 10.2.2 update addresses is the need for shared storage across clusters or offering lower-cost storage for non-running workloads. For example, instead of having one storage policy for all VMware Cloud Director entities, you can break your storage policy into a "Workload Storage Policy" for all your running VMs and containers, and dedicate a "Catalog Storage Policy" for longer-term storage. A slower or low-cost NFS option can back the "Catalog Storage Policy", while the "Workload Storage Policy" can run on vSAN.

Starting with VMware Cloud Director 10.2.2, if a provider does not want a provider VDC storage policy to support certain types of VMware Cloud Director entities, they can edit and limit the list of entities associated with the policy. Here is the list of supported entity types:

  • Virtual Machines – used for VMs and vApps and their disks
  • vApp/VM Templates – used for vApp templates
  • Catalog Media – used for media inside catalogs
  • Named Disks – used for named disks
  • TKC – used for TKG clusters
  • Edge Gateways – used for edge gateways

You can limit the entity types that a storage policy supports to one or more types from this list. When you create an entity, only the storage policies that support its type are available.

Use Case – Catalog-Only Storage Policy

There are many use cases for assignable storage policies; I am demonstrating this one because many providers have asked for this feature in my discussions. For this use case we will take the entity types Media and vApp Template.

Adding the Media and vApp Template entity types to a storage policy marks it as usable with VDC catalogs. These entity types can be added at the PVDC storage policy layer. Storage policies that are associated with datastores intended for catalog-only storage can be marked with these entity types, to force only catalog-related items onto the catalog-only storage datastore.

When added: VCD users will be able to use this storage policy with media/templates. In other words, tenants will see this storage policy as an option when pre-provisioning their catalogs on a specific storage policy.

  • In the Cloud Director provider portal, select Resources and click Cloud Resources.
  • Select Provider VDCs, and click the name of the target provider virtual data center.
  • Under Policies, select Storage.
  • Click the radio button next to the target storage policy, and click Edit Supported Types.
  • From the Supports Entity Types drop-down menu, select Select Specific Entities.
  • Select the entities that you want the storage policy to support, and click Save.

Validation

Let's validate this functionality by logging in as a tenant and going into the "Storage Policies" settings; here we can see this Org VDC has two storage policies assigned by the provider.

Now let's deploy a virtual machine in the same Org VDC; you can see that the policy "WCP", which was marked as catalog-only, is not available for VM provisioning.

In the same Org VDC let's create a new "Catalog"; here you can see both policies are visible, one exclusively for catalogs and another which is allowed for all entity types.

Policy removal: VCD users will no longer be able to use this storage policy with media/templates, but whatever is already stored there will remain.

This addition to Cloud Director gives providers the opportunity to manage storage based on entity type. This is one use case; similarly, one particular storage policy could be used for edge placement, another could be used to spin up production-grade Tanzu Kubernetes clusters, while default storage could be used by CSE native Kubernetes clusters for development container workloads. This opens up new monetization opportunities for providers: upgrade your Cloud Director environment and start monetizing.

This post is also available as a podcast.

Auto Scale Applications with VMware Cloud Director

Starting with VMware Cloud Director 10.2.2, tenants can auto scale applications depending on current CPU and memory utilization. Based on predefined criteria for CPU and memory use, VMware Cloud Director can automatically scale up or down the number of VMs in a selected scale group.

Cloud Director scale groups are a new top-level object that tenants can use to implement automated horizontal scale-in and scale-out on a group of workloads. You can configure auto scale groups with:

  • A source vApp template
  • A load balancer network
  • A set of rules for growing or shrinking the group based on the CPU and memory use

VMware Cloud Director automatically spins up or shuts down VMs in a scaling group based on the above three settings. This blog post will help you enable scale groups in Cloud Director, and we will also configure them.

Configure Auto Scale

Log in to the Cloud Director 10.2.2 cell with admin/root credentials and enable metric data collection and publishing, either by setting up metrics collection in a Cassandra database or by collecting metrics without data persistence. In this post we are going to configure it without metrics data persistence; to collect metrics data without data persistence, run the following commands:

#/opt/vmware/vcloud-director/bin/cell-management-tool manage-config -n statsFeeder.metrics.collect.only -v true 

#/opt/vmware/vcloud-director/bin/cell-management-tool manage-config -n statsFeeder.metrics.publishing.enabled -v true

The second step is to create a file named "metrics.groovy" in the /tmp folder of the Cloud Director appliance with the following contents:

configuration {
    metric("cpu.ready.summation") {
        currentInterval=20
        historicInterval=20
        entity="VM"
        instance=""
        minReportingInterval=300
        aggregator="AVERAGE"
    }
}

Change the file permissions appropriately and import the file using the below command:

$VCLOUD_HOME/bin/cell-management-tool configure-metrics --metrics-config /tmp/metrics.groovy

Let's enable the auto scaling plugin by running the below commands on the Cloud Director cell:

$VCLOUD_HOME/bin/cell-management-tool configure-autoscale --set enabled=true
$VCLOUD_HOME/bin/cell-management-tool configure-autoscale --set username=<username> (an account with admin privileges)
$VCLOUD_HOME/bin/cell-management-tool configure-autoscale --encrypt --set password=<password> (password for the above account)

If you look in the Cloud Director provider console under Customize Portal, there will be a plugin which the provider can enable for tenants who need auto scale functionality for their applications, or it can be made available to all tenants.

Auto Scale was released with VMware Cloud Director 10.2.2, which allows service providers to grant tenants the right to create scale groups. Under Tenant Access Control, select Rights Bundles, then select the "vmware:scalegroup Entitlement" bundle and click Publish.

Also ensure the provider adds the necessary VMWARE:SCALEGROUP rights to the tenant roles that want to use scale groups.

Tenant Self Service

In the tenant portal after login, select Applications, select the Scale Groups tab, and click New Scale Group.

Select an organization VDC in which the tenant wants to create the scale group.

Alternatively, the tenant can access scale groups from a selected organization virtual data center (VDC) by going into the specific organization VDC.

  • Enter a name and a description of the new scale group.
  • Select the minimum and maximum number of VMs to which you want the group to scale and click Next.

Select a VM template for the VMs in the scale group and a storage policy, and click Next. The template needs to be pre-populated in a catalog by the tenant, or by the provider and published for tenants.

The next step is to select a network for the scale group. If the tenant's oVDC is backed by NSX-T Data Center and NSX ALB (AVI) has been published as a load balancer by the provider, the tenant can choose the NSX ALB load balancer edge on which load balancing has been enabled and a server pool has been set up by the tenant before enabling scale groups.

If the tenant wants to manage the load balancer on their own, or if there is no need for a load balancer, then select "I have a fully set-up network"; Auto Scale will automatically add VMs to this network.

VMware Cloud Director starts the initial expansion of the scale group to reach the minimum number of VMs; basically, it will start creating VMs from the template that the tenant selected while creating the scale group, and it will continue to spin up VMs until the minimum number specified for the scale group is reached.

Add an Auto Scaling Rule

  1. Click Add Rule.
  2. Enter a name for the rule.
  3. Select whether the scale group must expand or shrink when the rule takes effect.
  4. Select the number of VMs by which you want the group to expand or shrink when the rule takes effect.
  5. Enter a cooldown period in minutes after each auto scale event in the group. The conditions cannot trigger another scaling until the cooldown period expires, and the cooldown period resets when any of the rules of the scale group takes effect.
  6. Add a condition that triggers the rule. The duration period is the time for which the condition must be valid to trigger the rule. To trigger the rule, all conditions must be met.
  7. To add another condition, click Add Condition. The tenant can add multiple conditions.
  8. Click Add.

From the details view of a scale group, when you select Monitor, you can see all tasks related to this scale group. For example, you can see the time of creation of the scale group, all growing or shrinking tasks for the group, and the rules that initiated the tasks; inside the Virtual Machines section you can see which VM was created at what time, the IP address of the VM, and so on.

Here is the section showing scale tasks which were triggered, and their status.

The Virtual Machines section, as mentioned, provides scale group VM information and details.

Here is another section showing scale tasks which were triggered, with their status, start time, and completion time.

Auto scale groups in Cloud Director 10.2.2 bring very important functionality natively to Cloud Director, which does not require any external components like vRealize Orchestrator or vROps and does not incur additional cost to the tenant or provider. Go ahead, upgrade your Cloud Director, enable it, and let your tenants enjoy this cool functionality.

vSphere Tanzu with AVI Load Balancer

With the release of vSphere 7.0 Update 2, VMware adds a new load balancer option for vSphere with Tanzu which provides a production-ready load balancer for your vSphere with Tanzu deployments. This load balancer is called NSX Advanced Load Balancer (NSX ALB, or AVI Load Balancer). It provides virtual IP addresses for the Supervisor control plane API server, the TKG guest cluster API servers, and any Kubernetes applications that require a service of type LoadBalancer. In this post, I will go through a step-by-step deployment of the new NSX ALB along with vSphere with Tanzu.

VLAN & IP address Planning

There are many ways to plan the IP addressing; in this lab I will place management, VIP, and workload nodes on three different networks. For this deployment I will use three VLANs: one for Tanzu management, one for frontend/VIP, and one for the Supervisor cluster and TKG clusters. Here is my IP planning sheet:

Deploying & Configuring NSX ALB (AVI)

Now let's deploy the NSX ALB controller (AVI LB), following a process very similar to deploying any other OVA; I will assign the NSX ALB management IP address from the management network range. The NSX ALB is available as an OVA. In this deployment, I am using version 20.1.4. The only information required at deployment time is:

  • A static IP Address
  • A subnet mask
  • A default gateway
  • A sysadmin login authentication key

I have deployed one controller appliance for this lab, but for a production deployment it is recommended to create a three-node controller cluster for high availability and better performance.

Once the OVA deployment completes, power on the VM, wait some time, then browse to the NSX ALB URL using the IP address provided during deployment, log in to the controller, and then:

  • Enter DNS Server Details and Backup Passphrase
  • Add NTP Server IP address
  • Provide Email/SMTP details (not mandatory)

Next, choose VMware vCenter as your "Orchestrator Integration"; this creates a new cloud configuration in NSX ALB called Default-Cloud. Enter the below details on the next screen:

  • IP address of your vCenter
  • vCenter Credential
  • Permission – Write Permission
  • SDN Integration – None
  • Select appropriate vCenter “Data Center”
  • For Default Network IP Address Management – Static

On the next screen, we define the IP address pool for the service engines.

  • Select the management network (the "management interface" of the "service engines" will connect to this network)
  • Enter the IP subnet
  • Enter free IPs into the IP Address Pool section
  • Enter the default gateway

Select No for configuring multiple Tenants. Now we’re ready to get into the NSX ALB configuration.

Create IPAM Profile

IPAM will be used to assign VIPs to virtual services, Kubernetes control planes, and applications running inside pods. To create an IPAM profile, go to: Templates -> Profiles -> IPAM/DNS Profiles

  • Assign a name to the profile; this IPAM will be for the "frontend" network
  • Select Type – "Avi Vantage IPAM"
  • Cloud for Usable Network – choose "Default-Cloud"
  • Usable Network – choose the port group, in my case "frontend" (all vCenter port groups are populated automatically by vCenter discovery)

Create and configure a DNS profile as below (this is optional):

Go to "Infrastructure", click on "Cloud", edit "Default Cloud", and update the IPAM Profile and DNS Profile fields with the IPAM and DNS profiles that we created above.

Configure the VIP Network

In the NSX ALB console, go to "Infrastructure" and then "Networks"; this will display all the networks discovered by NSX ALB. Select the "frontend" network and click on Edit.

  • Click on "Add Subnet"
  • Enter the subnet, in my case – 192.168.117.0/24
  • Click on Static IP Address pool:
  • Ensure "Use Static IP Address for VIPs and SE" is selected
    • and enter the IP segment pool, in my case 192.168.117.100-192.168.117.200
    • Click on Save

Create New Controller Certificate

The default AVI certificate doesn't contain an IP SAN and can't be used by vCenter/Tanzu to connect to AVI, so we need to create a custom controller certificate and use it during the Tanzu management plane deployment. Let's create the controller certificate by going to Templates -> Security -> SSL/TLS Certificates -> Create -> Controller Certificate

Complete the page with the required information and make sure the "Subject Alternative Name (SAN)" is the NSX ALB controller IP/cluster IP or hostname.

Then go to Administration -> Settings -> Access Settings and edit System Access Settings:

Delete all the certificates in the SSL/TLS certificate field and choose the certificate that we created in the above section.

Go to Templates -> Security -> SSL/TLS Certificates and copy the certificate we created, for use while enabling the Tanzu management plane.

Configure Routing

Since the workload network (192.168.116.0/24) is on a different subnet from the VIP network (192.168.117.0/24), we need to add a static route in the NSX ALB controller. Go to the Infrastructure page, navigate to Routing and then to Static Route. Click the Create button and create the static routes accordingly.

Enable Tanzu Control Plane (Workload Management)

I am not going to go through the full deployment of workload management; the steps are similar to those detailed HERE. However, there are a few steps that are different:

  • On page 6, choose Type = AVI as your load balancer type.
  • There is no load balancer IP address range required; this is now provided by NSX ALB.
  • The certificate we need to provide should be the NSX ALB certificate we created in the previous step.

The new NSX Advanced Load Balancer is far superior to the HA-Proxy, especially in provider environments. Providers can deploy, offer, and manage K8s clusters with a VMware-supported LB type; even though the configuration requires a few additional steps, it is very simple to set up. The visibility provided into the health and usage of the virtual services will be extremely beneficial for day-2 operations, and should provide great insights for providers who are responsible for provisioning and managing Kubernetes distributions running on vSphere. Feel free to share any feedback.

Deploy Cloud Director orgVDC Edge the API Way

Recently I was working with a provider who needed some guidance on deploying orgVDC edges via the API, so that their developers could automate customer orgVDC edge deployment for tenants. Here are the steps which I shared with them; sharing here too. Let's get started.

First we need to authenticate to Cloud Director. In this example, my credentials are the equivalent of a Cloud Director system administrator, and I connect to a provider session using the Cloud Director OpenAPI, choosing Basic authorization with provider credentials.

Let's do the POST API call to get an API session token:

POST - https://<vcdurl>/cloudapi/1.0.0/sessions/provider

Once you have authenticated, click on the Headers tab and you will see your authorization key for this session: the key "X-VMWARE-VCLOUD-ACCESS-TOKEN". We will use this key as a "Bearer" token to authenticate further API sessions.

The next API call gets the currently deployed edge gateways, using the above Bearer token; for me this was the easiest way to reference the required objects for the next API call. (I deployed a sample edge in the org to get references.)
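That call, in the same format as the one above (this endpoint is part of the Cloud Director OpenAPI):

GET - https://<vcd-fqdn>/cloudapi/1.0.0/edgeGateways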

The result of the above API call will give you lots of information that will be used in our next API call to create an edge:

Now let's create the orgVDC edge by building the body for the POST API call. From the above result, take a few mandatory field values, like:

  • uplinkId
  • subnets/gateway/IP addresses, as per your environment
  • orgVdc name and its ID
  • ownerRef name and its ID
  • orgRef name and its ID

Here is the reference payload of my API call for orgVDC edge creation:

{
  "name": "avnish2",
  "description": "test avnish2",
  "edgeGatewayUplinks": [
    {
      "uplinkId": "urn:vcloud:network:e0d2cd31-3034-489c-b318-cd36efe89729",
      "subnets": {
        "values": [
          {
            "gateway": "172.16.43.1",
            "prefixLength": 24,
            "ipRanges": {
              "values": [
                {
                  "startAddress": "172.16.43.41",
                  "endAddress": "172.16.43.50"
                }
              ]
            },
            "enabled": true,
            "primaryIp": "172.16.43.2"
          }
        ]
      },
      "connected": true,
      "quickAddAllocatedIpCount": null,
      "dedicated": false,
      "vrfLiteBacked": false
    }
  ],
  "orgVdc": {
    "name": "avnish-ovdc",
    "id": "urn:vcloud:vdc:45202398-51ee-4887-886a-1eecb9fdaa1c"
  },
  "ownerRef": {
    "name": "avnish-ovdc",
    "id": "urn:vcloud:vdc:45202398-51ee-4887-886a-1eecb9fdaa1c"
  },
  "orgRef": {
    "name": "Avnish",
    "id": "urn:vcloud:org:ddf6d02f-1552-49ac-8c25-30c36f63d5b6"
  }
}

Do the POST to https://<vcdip>/cloudapi/1.0.0/edgeGateways
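A hypothetical curl equivalent (the FQDN and token are placeholders, payload.json holds the body above, and the API version in the Accept header should match your VCD release, e.g. 35.0 for VCD 10.2):

curl -sk -X POST "https://<vcd-fqdn>/cloudapi/1.0.0/edgeGateways" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json;version=35.0" \
  -d @payload.json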

You should see an "accepted" response, which confirms that Cloud Director will deploy the orgVDC edge with the information that you provided in the above POST API call.

Once you get the "accepted" response, if you go to Cloud Director, you should see that the above call has deployed an "orgVDC Edge" for the specific organization.

I hope this helps in case you want to use the API to provision an orgVDC edge for a tenant.

SSL VPN with Cloud Director

Secure remote access to cloud services is essential to cloud adoption and use. A Cloud Director based cloud allows every tenant to use a dedicated edge gateway, providing a simple and easy-to-use solution that supports IPsec site-to-site virtual private networks (VPNs) backed by VMware NSX-T. However, NSX-T today does not support SSL VPN, and this limitation requires providers or tenants to select alternative solutions, open source or commercial, depending on the desired mix of features and support. Examples of such solutions are OpenVPN or WireGuard.

In this blog post we will deploy OpenVPN as a tenant admin, to allow access to cloud resources from outside the cloud using a VPN client.

Create a new VDC network

Let's create a new routed Org VDC network; we will deploy OpenVPN on this network. You can also deploy it on an existing routed network.

  1. In Cloud Director, go to the Networking section and click on New to create a new routed network
  2. Select the appropriate Org VDC
  3. Select the type of network as "Routed"
  4. Choose the appropriate edge for this routed network association
  5. Give the network a gateway CIDR
  6. Create a pool of IPs for the network to allocate
  7. Check the summary and click Finish to create the routed network

Creating NAT Rules

After creating the routed network, go to the edge gateway and open the edge gateway configuration:

Create two NAT rules for the OpenVPN appliance:

  1. An SNAT rule to allow the OpenVPN appliance outbound Internet access
    1. OpenVPN appliance IP to one external IP
  2. A DNAT rule to allow the OpenVPN appliance inbound access from the Internet
    1. External IP to the OpenVPN appliance IP

You might have to open certain firewall rules to access the OpenVPN admin console, depending on from where you are accessing the console.

NAT for Cloud Director Services

Since Cloud Director service is a managed service and its architecture is different from a cloud provider's environment, for CDS we need to follow a few extra steps, as explained below:

Deploy OpenVPN Appliance

  1. I have downloaded the latest OpenVPN appliance from here: https://openvpn.net/downloads/openvpn-as-latest-vmware.ova
  2. I have uploaded it into a catalog; select it from the catalog and click on Create vApp

Provide a Name and Description

Select the Org VDC

Edit VM configuration

This is a very important step; make sure you choose:

  1. Switch to the advanced networking workflow
  2. Change IP assignment to "Manual"
  3. Assign a valid IP manually from the range which we created during network creation; if you do not set the IP here, you will have to struggle with IP assignment on the appliance later

Review and finish; this will deploy the OpenVPN appliance. Once deployed, power on the appliance. Now it's time to configure it.

OpenVPN Initial Configuration

In VMware Cloud Director, go to Compute, click on Virtual Machines, and open the console of your OpenVPN virtual machine.

Log in to the VM as the root user. For the password, click on "Guest OS Customization" and click on Edit.

Copy the password in the "Specify password" section, and use this password to log in to the OpenVPN virtual machine.

After login, you will be prompted to answer a few questions:

Licence Agreement: as usual, you do not have a choice; enter "yes"

Will this be the primary Access Server node?: Enter “Yes”

Please specify the network interface and IP address to be used by the Admin Web UI:  If the guest customizations were applied correctly, this should default to eth0, which should be configured with an IP address on the network you selected during deployment.

Please specify the port number for the Admin Web UI: Enter your desired port number, or accept the default of 943.

Please specify the TCP port number for the OpenVPN Daemon: Use default “443”

Should client traffic be routed by default through VPN?: Choose “No”.

If you enter "yes", it will prevent the client device from accessing any other networks while the VPN is connected.

Should client DNS traffic be routed by default through the VPN?: Choose “No” if your above answer is “No”.

Use local authentication via internal DB? : Enter Yes or No based on your choice of authentication.

Should private subnets be accessible to clients by default?: Enter "yes"; this will make your cloud networks accessible via the VPN.

Do you wish to login to the Admin UI as "openvpn"?: Answer "yes", which will create a local user account named "openvpn". If you answer no, you'll need to set up a different user name and password.

Please specify your Activation key: If you’ve purchased a licence, enter the licence key, otherwise leave this blank.

If using the default "openvpn" account, after installing the OpenVPN Access Server for the first time you are required to set a password for the "openvpn" user at the command line with the command "#passwd openvpn", and then use that to log in to the Admin UI. This completes the deployment of the appliance; now you can browse to the config page, where we will configure SSL VPN specific options:

This section shows you whether the VPN Server is currently ON or OFF. Based on the current status, you can either Start the Server or Stop the Server with the button you see there.

Inside VPN network settings, specify the network settings which apply to your configuration.

Create users or configure other authentication methods; I am creating a sample user to access cloud resources based on the permissions.

Tenant User Access

The tenant user accesses the public IP of the SSL VPN that we assigned initially and logs in with their credentials.

Once logged in, the user has the choice to download the VPN client as well as connection profiles; these connection profiles hold the login information for the user.

In case the user sees a private IP in the profile (@192.168.10.101), click on the pen icon to edit the profile.

Once the user has edited the profile, they can successfully connect to the cloud.

White Label OpenVPN

In case the provider wants to white label OpenVPN, they can easily do this by following the simple procedure below (see the sketch after this list):

  1. Copy a logo to the OpenVPN appliance and edit the file – nano /usr/local/openvpn_as/etc/as.conf
  2. Add the below line after the #sa.company_name line for the company logo
    1. sa.logo_image_file=/usr/local/openvpn_as/companylogo.png
  3. Uncomment sa.company_name and change it to your desired company name text
  4. To hide the footer, below the sa.company_name and/or sa.logo_image_file variables, add the following: cs.footer=hide
  5. Save and exit the file, then restart the OpenVPN Access Server: service openvpnas restart
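For reference, a minimal sketch of the resulting as.conf excerpt (the company name and logo path are placeholders):

# /usr/local/openvpn_as/etc/as.conf (excerpt)
sa.company_name=Example Cloud
sa.logo_image_file=/usr/local/openvpn_as/companylogo.png
cs.footer=hide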

This completes the installation, configuration, and white labelling of OpenVPN. These configuration steps apply to VMware Cloud Director and to the Cloud Director service tenant portal on VMware Cloud on AWS. Please share feedback if any.

Tanzu Basic – Building TKG Cluster

In continuation of our Tanzu Basic deployment series, this is the last part. By now we have our vSphere with Tanzu cluster enabled and deployed; the next step is to create Tanzu Kubernetes clusters. In case you missed the previous posts, here they are:

  1. Getting Started with Tanzu Basic
  2. Tanzu Basic – Enable Workload Management

Create a new namespace

A vSphere Namespace is kind of a resource pool or a container that I can give to a project, team, or customer: a "Kubernetes + VM environment" where they can create and manage their application containers and virtual machines. They can't see each other's environments and they can't expand past the limits set by administrators. The vSphere Namespace construct allows the vSphere admin to set several policies in one place. The user/team/customer/project can create new workloads as they wish within their vSphere Namespace. You also set resource limits and permissions on the namespace so that DevOps engineers can access it. Let's create our first namespace by going to the vCenter Menu and clicking on "Workload Management"

Once you are on the "Workload Management" page, click on "CREATE NAMESPACE"

Select the vSphere Cluster on which you enabled “workload management”

  1. Give a DNS-compliant name for the namespace
  2. Select the network for the namespace

Now we have successfully created a namespace named "tenant1-namespace".

The next step is to add storage. Here we need to choose a vCenter storage policy which TKG will use to provision control plane VMs; this policy will also show up as a Kubernetes storage class for this namespace. The persistent volume claims that correspond to persistent volumes can originate from the Tanzu Kubernetes cluster.

After you assign the storage policy, vSphere with Tanzu creates a matching Kubernetes storage class in the Namespace. For the VMware Tanzu Kubernetes Clusters, the storage class is automatically replicated from the namespace to the Kubernetes cluster. When you assign multiple storage policies to the namespace, a separate storage class is created for each storage policy.

Access Namespace

Share the Kubernetes control plane URL with your DevOps engineers, as well as the user name they can use to log in to the namespace through the Kubernetes CLI Tools for vSphere. You can grant a DevOps engineer access to more than one namespace.

The developer browses the URL and downloads the TKG CLI plugin for their environment (Windows, Linux, or Mac).

To provision Tanzu Kubernetes clusters using the Tanzu Kubernetes Grid Service, we connect to the Supervisor Cluster using the vSphere Plugin for kubectl, which we downloaded in the above step, and authenticate with the vCenter Single Sign-On credentials given by the vSphere admin to the developer.

After you log in to the Supervisor Cluster, the vSphere Plugin for kubectl generates the context for the cluster. In Kubernetes, a configuration context contains a cluster, a namespace, and a user. You can view the cluster context in the file .kube/config. This file is commonly called the kubeconfig file.

I am switching to the "tenant1-namespace" context as I have access to multiple namespaces; similarly, a DevOps user can switch context with the following command:
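For example, using standard kubectl syntax (the context name matches the namespace):

#kubectl config use-context tenant1-namespace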

The below commands help you explore and find the right VM types for Kubernetes cluster sizing:

#kubectl get sc

This command will list all the storage classes

#kubectl get virtualmachineimages

This command will list all the VM images available for creating TKG clusters; this will help you decide the Kubernetes version that you want to use

#kubectl get virtualmachineclasses

This command will list all the machine classes (T-shirt sizes) available for TKG clusters

Deploy a TKG Cluster

To deploy a TKG cluster we need to create a YAML file with the required configuration parameters to define the cluster; a sketch of such a file follows.
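The original post showed the YAML as a screenshot; here is a minimal sketch consistent with the points below (the cluster name, VM class, and storage class are placeholders — use values returned by the commands above):

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01
  namespace: tenant1-namespace
spec:
  distribution:
    version: v1.18
  topology:
    controlPlane:
      count: 3
      class: best-effort-small
      storageClass: tanzu-storage-policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: tanzu-storage-policy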

  1. The above YAML provisions a cluster with three control plane nodes and three worker nodes.
  2. The apiVersion and kind parameter values are constants.
  3. The Kubernetes version, listed as v1.18, is resolved to the most recent distribution matching that minor version.
  4. The VM class best-effort-<size> has no reservations. For more information, see Virtual Machine Class Types for Tanzu Kubernetes Clusters.

Once the file is ready, let's provision the Tanzu Kubernetes cluster using the following kubectl command:
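Assuming the sketch above was saved as tkg-cluster-01.yaml:

#kubectl apply -f tkg-cluster-01.yaml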

Monitor the cluster provisioning using the vSphere Client; the TKG management plane creates the Kubernetes cluster automatically.

Verify cluster provisioning using the following kubectl commands.
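For example (using the hypothetical names from the sketch above):

#kubectl get tanzukubernetescluster -n tenant1-namespace
#kubectl describe tanzukubernetescluster tkg-cluster-01 -n tenant1-namespace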

You can continue to monitor/verify cluster provisioning using the #kubectl describe tanzukubernetescluster command; at the end of the command output, it shows:

Node Status – shows node status from the Kubernetes perspective

VM Status – shows node status from the vCenter perspective

After around 15-20 minutes, you should see the VM and node status as ready, and the phase shown as Running. This completes the deployment of a Kubernetes cluster on vSphere 7 with Tanzu; we successfully deployed a Kubernetes cluster, so now let's deploy an application and expose it to the external world.

Deploy an Application

To deploy your first application we need to log in to the new cluster that we created; you can use the below command:

#kubectl vsphere login --server=IP-ADDRESS --vsphere-username USERNAME --tanzu-kubernetes-cluster-name CLUSTER-NAME --tanzu-kubernetes-cluster-namespace NAMESPACE

Once login is complete, we can deploy application workloads to Tanzu Kubernetes clusters using pods, services, persistent volumes, and higher-level resources such as Deployments and ReplicaSets. Let's deploy an application using the below command:

#kubectl run --restart=Never --image=gcr.io/kuar-demo/kuard-amd64:blue kuard

The command has successfully deployed the application; let's expose it so that we can access it using the VMware HAProxy load balancer:

#kubectl expose pod kuard --type=LoadBalancer --name=kuard --port=8080 

The application is exposed successfully; let's get the public IP which has been assigned to the application by the above command. Here it is the external IP – 192.168.117.35.
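The external IP can be retrieved with a standard kubectl command:

#kubectl get service kuard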

Let's access the application using the IP assigned to it and see if we can easily reach it.

Get Visibility of Cluster using Octant

Octant is a tool for developers to understand how applications run on a Kubernetes cluster. It aims to be part of the developer's toolkit for gaining insight into and approaching the complexity found in Kubernetes. Octant offers a combination of introspective tooling, cluster navigation, and object management, along with a plugin system to further extend its capabilities. Installation is pretty simple and detailed Here

This completes the series, with the installation of a TKG Kubernetes cluster, running an application on top of it, and accessing that application using HAProxy. !!Please share your feedback if any!!

Tanzu Basic – Enable Workload Management

In continuation of the last post where we deployed VMware HAProxy, we will now enable a vSphere cluster for Workload Management, configuring it as a Supervisor Cluster.

Part 1 – Getting Started with Tanzu Basic

What is Workload Management

With Workload Management we can deploy and operate the compute, networking, and storage infrastructure for vSphere with Kubernetes. vSphere with Kubernetes transforms vSphere into a platform for running Kubernetes workloads natively on the hypervisor layer. When enabled on a vSphere cluster, vSphere with Kubernetes provides the capability to run Kubernetes workloads directly on ESXi hosts and to create upstream Kubernetes clusters within dedicated resource pools.

Since we chose to create a Supervisor Cluster with the vSphere networking stack in the previous post, vSphere Native Pods will not be available, but we can create Tanzu Kubernetes clusters.

Prerequisites

As per our HAProxy deployment, we chose an HAProxy VM with three virtual NICs, thus connecting HAProxy to a frontend network. DevOps users and external services can access HAProxy through virtual IPs on the frontend network. Below are the prerequisites to enable Workload Management:

  • DRS and HA should be enabled on the vSphere cluster, and ensure DRS is in the fully automated mode.
  • Configure shared storage for the cluster. Shared storage is required for vSphere DRS, HA, and storing persistent volumes of containers.
  • Storage Policy: Create a storage policy for the placement of Kubernetes control plane VMs.
    • I have created two policies named "basic" & "TanzuBasic"
    • NOTE: You should create the policy with a lower-case policy name
    • This policy has been created with Tag based placement rules
  • Content Library: Create a subscribed content library using URL: https://wp-content.vmware.com/v2/latest/lib.json on the vCenter Server to download VM image that is used for creating nodes of Tanzu Kubernetes clusters. The library will contain the latest distributions of Kubernetes.
  • Add all hosts from the cluster to a vSphere Distributed Switch and create port groups for Workload Networks

Deploy Workload Management

With the release of vSphere 7 Update 1, a free trial of Tanzu is available for a 60-day evaluation. Enter your details to receive communication from VMware and get started with Tanzu.

The next screen takes you to the networking options available with vCenter; make sure:

  • You choose the correct vCenter
  • For networking there are two networking stacks; since we haven't installed NSX-T, that option will be greyed out and unavailable, so choose "vCenter Server Network" and move to "Next"

On the next screen you will be presented with the vSphere clusters which are compatible with Tanzu. In case you don't see any cluster, go to the "Incompatible" section and click on the cluster, which will give you guidance on the reason for incompatibility; go back, fix the reason, and try again.

Select the size of the resource allocation you need for the control plane. For an evaluation, Tiny or Small should be enough; click on Next.

Storage: select the storage policy which we created as part of the prerequisites and click on Next.

Load Balancer: this section is very important, and we need to ensure that we provide correct values:

  • Enter a DNS-compliant name; don't use underscores in the name
  • Select the type of load balancer: "HA Proxy"
  • Enter the management data plane IP address. This is the management IP and port number assigned to the VMware HAProxy management interface; in our case it is 192.168.115.10:5556.
  • Enter the username and password used during deployment for the HAProxy
  • Enter the IP address ranges for virtual servers. We need to provide the IP range for virtual servers; these are the IP addresses which we defined on the frontend network. It's the exact same range we used during the HAProxy configuration process, but this time we have to write the full range instead of using CIDR format; in this case I am using: 192.168.117.33-192.168.117.62
  • Finally, enter the server CA cert. If you added a cert during deployment, you would use that. If you used a self-signed cert, you can retrieve that data from the VM at /etc/haproxy/ca.crt.

Management Network: the next portion is to configure IP addresses for the Tanzu Supervisor control plane VMs; these will be from the management IP range.

  • We will need 5 consecutive free IPs from the management IP range; the Starting IP Address is the first IP in a range of five IPs to assign to the Supervisor control plane VMs' management network interfaces.
  • One IP is assigned to each of the three Supervisor control plane VMs in the cluster
  • One IP is used as a floating IP which we will use to connect to the management plane
  • One IP is reserved for use during the upgrade process
  • These IPs will be on the mgmt port group

Workload Network:

Service IP Address: we can take the default network subnet for "IP Address for Services"; change this if you are using this subnet anywhere else. This subnet is for internal communication and is not routed.

And the last network is where we define the Kubernetes node IP range; this applies to both the Supervisor cluster and the guest TKG clusters. This range will be from the workload IP range which we created in the last post with VLAN 116.

  • Port Group – workload
  • IP Address Range – 192.168.116.32-192.168.116.63

Finally, choose the content library which we created as part of the prerequisites.

If you have provided the right information with the correct configuration, it will take around 20 minutes to install and configure the entire TKG management plane. You might see a few errors while the management plane is being configured, but you can ignore them, as those operations will be retried automatically and the errors will clear when the particular task succeeds.

NOTE: The above screenshot has a different cluster name, as I have taken it from a different environment, but the IP schema is the same.

I hope this article helps you to enable your first "Workload Management" vSphere cluster without NSX-T. In the next blog post I will cover the deployment of TKG clusters and other things around that.

Getting Started with Tanzu Basic

In the process of modernizing your data center to run VMs and containers side by side, you can run Kubernetes as part of vSphere with Tanzu Basic. Tanzu Basic embeds Kubernetes into the vSphere control plane for the best administrative control and user experience. Provision clusters directly from vCenter and run containerized workloads with ease. Tanzu Basic is the most affordable edition and includes the below components:

To install and configure Tanzu Basic without NSX-T, at a high level there are four steps which we need to perform, and I will be covering them across three blog posts:

  1. vSphere 7 with a cluster with HA and DRS enabled should already be configured
  2. Installation of VMware HA Proxy Load Balancer – Part1
  3. Tanzu Basic – Enable Workload Management – Part2
  4. Tanzu Basic – Building TKG Cluster – Part3

Deploy VMware HAProxy

There are a few topologies to set up Tanzu Basic with vSphere based networking. For this blog we will deploy the HAProxy VM with three virtual NICs: one on a "Management" network, one on a "Workload" network, and one on a "Frontend" network, which will be used by DevOps users; external services will also access HAProxy through virtual IPs on this frontend network.

| Network | Use |
| --- | --- |
| Management | Communicating with vCenter and HAProxy |
| Workload | IPs assigned to Kubernetes nodes |
| Front End | DevOps users and external services |

For This Blog, I have created three VLAN based Networks with below IP ranges:

| Network | IP Range | VLAN |
| --- | --- | --- |
| tkgmgmt | 192.168.115/24 | 115 |
| Workload | 192.168.116/24 | 116 |
| Frontend | 192.168.117/24 | 117 |

Here is the topology diagram; HAProxy has been configured with three NICs, and each NIC is connected to one of the VLANs that we created above.

NOTE: If you want to deep dive into this networking, refer Here; that blog post describes it very nicely, and I have used the same networking schema in this lab deployment.

Deploy VMware HA Proxy

This is not the common HAProxy; it is a customized appliance whose Data Plane API is designed to enable Kubernetes workload management with Project Pacific on vSphere 7. VMware HAProxy deployment is very simple: you can download the OVA from Here and follow the same procedure as for any other OVA deployment on vCenter. There are a few important things, which I cover below:

On the Configuration screen, choose "Frontend Network" for the three-NIC deployment topology.

Next comes the Networking section, which is the heart of the solution; here we map the port groups created above to the Management, Workload and Frontend networks.

The management network is on VLAN 115; this is the network where the vSphere with Tanzu Supervisor control plane VMs/nodes are deployed.

The workload network is on VLAN 116, where the Tanzu Kubernetes cluster VMs/nodes will be deployed.

The frontend network is on VLAN 117; this is where the load balancers (Supervisor API server, TKG API servers, TKG LB services) are provisioned. The frontend network and the workload network must be routable to each other for successful WCP enablement.

The next page is the most important one: the VMware HAProxy appliance configuration. Provide a root password and tick/untick the root login option based on your choice. The TLS fields will be automatically generated if left blank.

In the "network config" section, provide the network details of the VMware HAProxy for the management network, the workload network and the frontend/load balancer network. These all require static IP addresses in CIDR format; specify a CIDR prefix that matches the subnet mask of your networks.

For Management IP: 192.168.115.5/24 and GW:192.168.115.1

For Workload IP: 192.168.116.5/24 and GW:192.168.116.1

For Frontend IP: 192.168.117.5/24 and GW: 192.168.117.1. This is not optional if you selected Frontend in the Configuration section.

In the Load Balancing section, enter the load balancer IP ranges. These IP addresses will be used as virtual IPs by the load balancer, and they come from the frontend network IP range.

Here I am specifying 192.168.117.32/27; this segment gives me 30 usable addresses for VIPs for Tanzu management plane access and for applications exposed for external consumption. Ignore "192.168.117.30" in the image background.

Enter the Data Plane API management port (5556), and also enter a username and password for the load balancer Data Plane API.

Finally, review the summary and click Finish; this will deploy the VMware HAProxy LB appliance.

Once the deployment completes, power on the appliance, SSH into the VM using the management IP, and check that all the interfaces have the correct IPs:

Also check that you can ping the frontend IP range and the other IP ranges; a quick sanity check is sketched below.
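A minimal check, assuming the lab addressing used above (the IPs are this lab's values; adjust them to yours):

#ssh root@192.168.115.5
#ip -4 addr show
#ping -c 3 192.168.116.1
#ping -c 3 192.168.117.1

If the interfaces and gateways respond as expected, the appliance networking is good. Stay tuned for Part 2.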

Load Balancer as a Service with Cloud Director


NSX Advanced Load Balancer's (Avi) intent-based software load balancer provides scalable application delivery across any infrastructure. Avi provides 100% software load balancing to ensure a fast, scalable and secure application experience. It delivers elasticity and intelligence across any environment, scales from 0 to 1 million SSL transactions per second in minutes, and achieves 90% faster provisioning and 50% lower TCO than a traditional appliance-based approach.

With the release of Cloud Director 10.2, NSX ALB is natively integrated with Cloud Director to provide self-service Load Balancing as a Service (LBaaS), where providers can expose load balancing functionality to tenants, and tenants consume it based on their requirements. In this blog post we will cover how to configure LBaaS.

Here is the high-level workflow:

  1. Deploy NSX ALB Controller Cluster
  2. Configure NSX-T Cloud
  3. Discover NSX-T inventory, logical segments, NS groups (ALB does this automatically)
  4. Discover vCenter inventory, hosts, clusters, switches (ALB does this automatically)
  5. Upload the SE OVA to a content library (ALB does this automatically; you just need to specify the name of the content library)
  6. Register the NSX ALB Controller, NSX-T Cloud and Service Engines to Cloud Director and publish to tenants (provider-controlled configuration)
  7. Create virtual services, pools and other settings (tenant self-service)
  8. Create/Delete SE VMs & connect to tenant network (ALB/VCD Automatically)

Deploy NSX ALB (AVI) Controller Cluster

The NSX ALB (Avi) Controller provides a single point of control and management for the cloud. The Avi Controller runs on a VM and can be managed using its web interface, CLI, or REST API, but in this case via Cloud Director. The Avi Controller stores and manages all policies related to services and management. To ensure Controller high availability, we deploy three Avi Controller nodes to create a 3-node Avi Controller cluster.

The deployment process is documented Here, and the cluster creation process is Here.

Create NSX-T Cloud inside NSX ALB (AVI) Controller

The NSX ALB (Avi) Controller uses APIs to interface with the NSX-T Manager and vCenter to discover the infrastructure. Here are the high-level activities to configure an NSX-T Cloud in the NSX ALB management console:

  1. Configure NSX-T manager IP/URL (One per Cloud)
  2. Provide admin credentials
  3. Select Transport zone (One to One Mapping – One TZ per Cloud)
  4. Select Logical Segment to use as SE Management Network
  5. Configure vCenter server IP/URL (One per Cloud)
  6. Provide Login username and password
  7. Select Content Library to push SE OVA into Content Library

Service Engine Groups & Configuration

Service Engines are created within a group, which contains the definition of how the SEs should be sized, placed, and made highly available. Each cloud will have at least one SE group.

  1. SE Groups contain sizing, scaling, placement and HA properties
  2. A new SE will be created from the SE Group properties
  3. SE Group options will vary based upon the cloud type
  4. An SE is always a member of the group it was created within, in this case the NSX-T Cloud
  5. Each SE group is an isolation domain
  6. Apps may gracefully migrate, scale, or failover across SEs in the groups

Service Engine High Availability:

Active/Standby

  1. VS is active on one SE, standby on another
  2. No VS scaleout support
  3. Primarily for default gateway / non-SNAT app support
  4. Fastest failover, but half of SE resources are idle

Elastic N + M

  1. All SEs are active
  2. N = number of SEs a new Virtual Service is scaled across
  3. M = the buffer, or number of failures the group can sustain
  4. SE failover decision determined at time of failure
  5. Session replication done after new SE is chosen
  6. Slower failover, less SE resource requirement

Elastic Active / Active 

  1. All SEs are active
  2. Virtual Services must be scaled across at least 2 Service engines
  3. Session info proactively replicated to other scaled service engines
  4. Faster failover, but requires more SE resources

Cloud Director Configuration

Cloud Director configuration is twofold: provider config and tenant config. Let's first cover the provider config…

Provider Configuration

Register Avi Controller: the provider administrator logs in as an admin and registers the Avi Controller with Cloud Director. The provider has the option to add multiple Avi Controllers.

NOTE – If you are registering with NSX ALB's default self-signed certificate and it throws an error during registration, regenerate the self-signed certificate in NSX ALB.

Register NSX-T cloud

Next, we need to register the NSX-T Cloud, which we configured in the ALB Controller, with Cloud Director:

  1. Select one of the registered Avi Controllers
  2. Provide a meaningful name for the controller
  3. Select the NSX-T cloud which we had registered in AVI
  4. Click on ADD.

Assign Service Engine groups

Now register Service Engine groups, either "Dedicated" or "Shared", based on tenant requirements; a provider can also have both types of groups and assign them to tenants as needed.

  1. Select NSX-T Cloud which we had registered above
  2. Select the “Reservation Model”
    1. Dedicated reservation model: for each LB-enabled tenant Org VDC Edge gateway, Avi creates two dedicated Service Engine nodes.
    2. Shared reservation model: shared SEs are elastic and shared among all tenants. Avi creates a pool of Service Engines that are shared across tenants; capacity allocation is managed in VCD, and Avi elastically deploys and un-deploys Service Engines based on usage.

Provider Enables and Allocates resources to Tenant

The provider enables LB functionality in the context of an Org VDC Edge by following the steps below:

  1. Click on Edges 
  2. Choose the Edge on which to enable load balancing
  3. Go to “Load Balancer” and click on “General Settings”
  4. Click on “Edit”
  5. Toggle on Activate to activate the load balancer
  6. Select Service Specification

The next step is to assign Service Engines to the tenant based on requirements: go to Service Engine Group, click "ADD", and add one of the previously registered SE groups to one of the customer's Edges.

Provider can restrict usage of Service Engines by configuring:

  1. Maximum Allowed: The maximum number of virtual services the Edge Gateway is allowed to use.
  2. Reserved: The number of guaranteed virtual services available to the Edge Gateway.

Tenant User Self Service Configuration

Pools: pools maintain the list of servers assigned to them and perform health monitoring, load balancing, and persistence.

  1. Inside General Settings some of the key settings are:
    1. Provide Name of the Pool
    2. Load Balancing Algorithm
    3. Default Server Port
    4. Persistence
    5. Health Monitor
  2. Inside Members section:
    1. Add the virtual machine IP addresses that need to be load balanced
    2. Define State, Port and Ratio
    3. SSL Settings allow SSL offload and Common Name Check

Virtual Services: A virtual service advertises an IP address and ports to the external world and listens for client traffic. When a virtual service receives traffic, it may be configured to:

  1. Proxy the client’s network connection.
  2. Perform security, acceleration, load balancing, gather traffic statistics, and other tasks.
  3. Forward the client’s request data to the destination pool for load balancing.

The tenant chooses the Service Engine group that the provider has assigned, then the load balancer pool created in the step above, and, most importantly, the virtual IP. This IP address can come from the external IP range of the Org VDC; if you want an internal VIP, you can use any IP.

So in my example, I am running two virtual machines with Org VDC internal IP addresses, and the VIP is from the external public IP address range; if I browse to the VIP, I can reach the web servers successfully using the VCD/Avi integration.

This completes the basic integration and configuration of LBaaS using Cloud Director and NSX Advanced Load Balancer. Feel free to share feedback.

VMware Cloud Director Two Factor Authentication with VMware Verify

In this post, I will be configuring two-factor authentication (2FA) for VMware Cloud Director using Workspace ONE Access, formerly known as VMware Identity Manager (vIDM). Two-factor authentication is a mechanism that checks username and password as usual but adds an additional security control before users are authenticated. It is a particular deployment of a more generic approach known as Multi-Factor Authentication (MFA). Throughout this post, I will be configuring VMware Verify as that second authentication factor.

What is VMware Verify ?

VMware Verify is built in to Workspace ONE Access (vIDM) at no additional cost, providing a 2FA solution for applications. VMware Verify can be set as a requirement on a per-app basis for web or virtual apps on the Workspace ONE launcher, or to log in to Workspace ONE to view your launcher in the first place. The VMware Verify app is currently available on iOS and Android. VMware Verify supports three methods of authentication:

  1. OneTouch approval
  2. One-time passcode via VMware Verify app (soft token)
  3. One-time passcode over SMS

By using VMware Verify, security is increased since a successful authentication does not depend only on something users know (their passwords) but also on something users have (their mobile phones), and for a successful break-in, attackers would need to steal both things from compromised users.

1. Configure VMware Verify

First you need to download and install "VMware Workspace ONE Access", which is very simple to deploy using an OVA. VMware Verify is provided as a service and thus does not require installing anything on an on-premises server. To enable VMware Verify, you must contact VMware support; they will provide you a security token, which is all you need to enable the integration with VMware Workspace ONE Access (vIDM). Once you get the token, log in to vIDM as an admin user and:

  1. Click on the Identity & Access Management tab
  2. Click on the Manage button
  3. Select Authentication Methods
  4. Click on the configure icon (pencil) next to VMware Verify
  5. A new window will pop-up, on which you need to select the Enable VMware Verify checkbox, enter the security token provided by VMware support, and click on Save.

2. Create a Local Directory on VMware Workspace One Access

VMware Workspace ONE Access supports not only Active Directory and LDAP directories but also other types of directories, such as local directories and Just-in-Time directories. For this lab, I am going to create a local directory using the local directory feature of Workspace ONE Access. Local users are added to a local directory on the service, and we need to manage the local user attribute mapping and password policies. You can create local groups to manage resource entitlements for users.

  1. Select the Directories tab
  2. Click on “Add Directory”
  3. Specify the directory and domain name (this is the same domain name I registered for VMware Verify)

3. Create/Configure a built-in Identity Provider

Once the second authentication factor is enabled as described in steps 1 and 2, it must be added as an authentication method to a Workspace ONE Access built-in identity provider. If one already exists in your environment, you can re-configure it; alternatively, you can create a new built-in identity provider as explained below. Log in to Workspace ONE Access as an admin user and then:

  1. Select the Identity & Access Management tab
  2. Click on the Manage button
  3. Click on the Identity Providers link
  4. Click on the Add Identity Provider button and select Create Built-in IDP
  5. Enter a name describing the Identity Provider (IdP)
  6. Which users can authenticate using the IdP – in the example below I am selecting the local directory that I created above.
  7. Network ranges from which users will be directed to the authentication mechanism described on the IdP
  8. The authentication methods to associate with this IdP – Here I am selecting VMware Verify as well as Local Directory.
  9. Finally click on the Add button

4. Update Access Policies on Workspace One Access

The last configuration step on Workspace ONE Access (vIDM) is to update the default access policy to include the second-factor authentication mechanism. For that, log in to Workspace ONE Access as an admin user and then:

  1. Select the Identity & Access Management tab
  2. Click on the Manage button
  3. Click on the Policies link
  4. Click on the Edit Default Policy button
  5. This will open a new page showing the details of the default access policy. Go to "Configuration" and click on "ALL RANGES".

A new window will pop-up. Modify the settings right below the line “then the user may authenticate using:”

  1. Select Password as the first authentication method – this way users will have to enter their ID and password as defined in the configured local directory
  2. Select the second authentication mechanism; here I am adding VMware Verify – after a successful password authentication, users will get a notification on their mobile phones to accept or deny the login request.
  3. I am leaving the line "If preceding Authentication Method fails or is not applicable, then:" empty because I don't want to configure any fallback authentication mechanism; you can choose one based on your needs.

5. Download the app on your mobile and register a user from a Cloud Director organization

  1. Open the app store on your mobile phone, search for VMware Verify, and download the app.
  2. Once it is downloaded, open the application. It will ask for your mobile number and e-mail address; enter your domain details. On the screenshot below, I'm providing my mobile number and an e-mail which is only valid in my lab. After clicking OK, you will be given two options for verifying your identity:
  1. Receiving an SMS message – the SMS will contain a registration code that you enter into the app.
  2. Receiving a Phone Call – after clicking on this option, the app will show a registration code you will need to type on the phone pad once you receive the call
  3. Since I am using the SMS method, it will ask you to enter the code received via SMS manually (XopRcVjd4u2)
  4. Once your identity has been verified, you will be asked to protect the app by setting a PIN. After that, the app will show that there are no accounts configured yet.
  5. Click on Account and add the account

Immediately after that, we will start receiving tokens in the VMware Verify mobile app, so at this moment you are ready to move to the next step.

6. Enable VMware Cloud Director Federation with VMware Workspace ONE Access

There are three authentication methods that are supported by vCloud Director:

Local: local users that are created at the time of installing vCD or while creating a new organization.

LDAP service: an LDAP service enables organisations to use their own LDAP servers for authentication. Users can then be imported into vCD from the configured LDAP.

SAML Identity Provider: A SAML Identity Provider can be used to authenticate users in organisations. SAML v2.0 metadata is required for the service to be configured. The metadata must include the location of the single sign-on service, the single logout service, and the X.509 certificate for the service. In this post we will be using federation between VMware Workspace One Access with VMware Cloud Director.

So, let's go ahead and log in to the VMware Cloud Director organization, go to "Administration" and click on "SAML".

  1. Enable federation by setting "Entity ID" to any unique string; in this case I am using the org name, which in my lab is "abc"
  2. Then click on “Generate” to generate a new certificate and click “SAVE”
  3. Download the metadata from the link; it will download the file "spring_saml_metadata.xml". This activity can be performed by a system or org administrator.
  4. In the VMware Workspace ONE Access (VIDM) admin console, go to "Catalog" and create a new web application.
    1. Provide the application name and description, upload an icon, and choose a category.
  5. In the next screen, keep the authentication type SAML 2.0 and paste the XML metadata downloaded above into the URL/XML window. Scroll down to Advanced Properties.
  6. In Advanced Properties we keep the defaults but add the Custom Attribute Mappings, which describe how VIDM user attributes translate to VCD user attributes.
  7. Now we can finish the wizard by clicking Next, selecting the access policy (keep the default), and reviewing the summary on the next screen.
  8. Next we need to retrieve the metadata configuration of VIDM: go back to Catalog and click on Settings, then from SAML Metadata download the Identity Provider (IdP) metadata.
  9. Now we can finalize the SAML configuration in vCloud Director: on the Federation page, toggle the Use SAML Identity Provider button to enable it, import the downloaded metadata (idp.xml) with the Browse and Upload buttons, and click Apply.
  10. We first need to import some users/groups to be able to use SAML. You can import VMware Workspace ONE Access (VIDM) users by their user name or group, and also assign a role to the imported user.

This completes the federation process between VMware Workspace ONE Access (VIDM) and VMware Cloud Director. For more details you can refer to This Blog Post.

Result – Cloud Director Two Factor Authentication in Action

Let your tenants open a browser and go to their tenant URL; they will be automatically redirected to the VMware Workspace ONE Access page for authentication:

  1. The user enters username and password; upon successful authentication, the flow moves to 2FA
  2. In the next step, the user gets a notification on their mobile phone
  3. Once the user approves the authentication on the phone, VMware Workspace ONE Access grants access based on the role given in VMware Cloud Director.

On-Board a New User

  1. Create a new user in VMware Workspace ONE Access and also grant them access to the application.
  2. The user gets an email to set up their password and must configure it.
  3. An administrator logs in to Cloud Director and imports the newly created user from SAML with a Cloud Director role.
  4. The user browses the cloud URL and, after logging in to the portal with user ID and password, is asked to provide a mobile number for second-factor authentication.
  5. After entering the mobile number, if the user has installed the "VMware Verify" app, they get a notification to Approve/Deny; if the app is not installed, they can click on "Sign in with SMS", receive an SMS, and enter that code for second-factor authentication.
  6. Once the user enters the passcode received on their phone, VMware Workspace ONE Access allows them to log in to Cloud Director.

This completes the installation and configuration of VMware Verify with VMware Cloud Director. You can add extras such as branding of your cloud, which will give your cloud its own identity.

Upgrade Tanzu Kubernetes Grid

Tanzu Kubernetes Grid makes it very simple to upgrade Kubernetes clusters without impacting control plane availability, and it ensures a rolling update for worker nodes. We just need to run two commands, "tkg upgrade management-cluster" and "tkg upgrade cluster", to upgrade clusters that we deployed with Tanzu Kubernetes Grid 1.0.0. In this blog post we will upgrade Tanzu Kubernetes Grid from version 1.0.0 to 1.1.0.

Pre-Requisite

In this post we are going to upgrade TKG from version 1.0.0 to version 1.1.0; to start, we need to download the new versions of the "TKG" client CLI, the base OS image template, and the API server load balancer.

  1. Download and Install new version of “TKG” cli on your client VM
    1. For Linux, download VMware Tanzu Kubernetes Grid CLI 1.1.0 for Linux and upload it to the client VM.
    2. Unzip using
      1. #gunzip tkg-linux-amd64-v1.1.0-vmware.1.gz
    3. The unzipped file will be named tkg-linux-amd64-v1.1.0-vmware.1
    4. Move and rename it to "tkg" using
      1. #mv ./tkg-linux-amd64-v1.1.0-vmware.1 /usr/local/bin/tkg
    5. Make it executable using
      1. #chmod +x /usr/local/bin/tkg
    6. Run #tkg version; this should show the updated version of the "tkg" client command line
  2. Download the new OVA templates
    1. Download OVA for Node VMs: photon-3-kube-v1.18.2-vmware.1.ova
    2. Download OVA for LB VMs: photon-3-haproxy-v1.2.4-vmware.1.ova
    3. Once downloaded, deploy these OVAs using "Deploy OVF Template" in vSphere
    4. When the OVA deployment completes, right-click the VM, select "Template", and click on "Convert to Template"

Upgrade TKG Management Cluster

As you know, the management cluster is purpose-built for operating the Tanzu platform and managing the lifecycle of Tanzu Kubernetes clusters, so we need to upgrade the Tanzu management cluster before we upgrade our Kubernetes clusters. This is the most seamless Kubernetes cluster upgrade I have ever done, so let's get into it:

  1. First, list the TKG management clusters running in the environment; the command below displays their information
    1. #tkg get management-cluster
  2. Once you have the name of the management cluster, run the command below to proceed with the upgrade
    1. #tkg upgrade management-cluster <management-cluster-name>
    3. The upgrade process first upgrades the Cluster API providers for vSphere, then upgrades the K8s version in the control plane and worker nodes of the management cluster.
  3. If everything goes fine, it should take less than 30 minutes to complete the upgrade of the management plane.
  4. Now if you run #tkg get cluster --include-management-cluster , it should show the upgraded version of the management cluster.

Upgrade Tanzu Kubernetes Cluster

Now that our management plane is upgraded, let's go ahead and upgrade the Tanzu Kubernetes clusters.

  1. The process here is the same as for the management cluster; run the command below to proceed with the upgrade
    1. #tkg upgrade cluster <cluster-name>
    2. If the cluster is not running in the "default" namespace, specify the "--namespace" option (when TKG is part of vSphere 7 with Kubernetes)
    3. The upgrade process upgrades the Kubernetes version across your control plane and worker virtual machines.
    4. Once done, you should see a successful upgrade of the Kubernetes cluster.
  2. Now log in with your credentials and check the Kubernetes version your cluster is running, as sketched below.
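A quick way to confirm the new version on every node via the TKG CLI and kubectl (the context naming follows the <cluster-name>-admin@<cluster-name> pattern used elsewhere in this document):

#tkg get credentials <cluster-name>
#kubectl config use-context <cluster-name>-admin@<cluster-name>
#kubectl get nodes -o wide

The VERSION column should show the upgraded Kubernetes version on all control plane and worker nodes.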

This completes the upgrade process; it really is an easy way to upgrade Kubernetes clusters without impacting cluster availability.

Configuring Ingress Controller on Tanzu Kubernetes Grid


Contour is an open source Kubernetes ingress controller providing the control plane for the Envoy edge and service proxy. Contour supports dynamic configuration updates and multi-team ingress delegation out of the box while maintaining a lightweight profile. In this blog post I will be deploying the ingress controller along with a load balancer (the LB was deployed in this post). You can also expose the Envoy proxy as a NodePort, which will allow you to access your service on each K8s node.

What is Ingress in Kubernetes

"NodePort" and "LoadBalancer" let you expose a service by specifying that value in the service's type. Ingress, on the other hand, is a completely independent resource from your service: you declare, create and destroy it separately from your services.

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give services externally reachable URLs, load balance traffic, terminate SSL/TLS, and offer name-based virtual hosting.


Pre-requisite

Before we begin we’ll need to have a few pieces already in place:

  • A Kubernetes cluster (See Here on How to Deploy TKG)
  • kubectl configured with admin access to your cluster
  • You have downloaded and unpacked the bundle of Tanzu Kubernetes Grid extensions which can be downloaded from here

Install Contour Ingress Controller

Contour is an ingress controller for Kubernetes that works by deploying the Envoy proxy as a reverse proxy and load balancer. To install Contour, follow the steps below:

  • Move the VMware Tanzu Kubernetes Grid Extensions Manifest 1.1.0 bundle downloaded in the prerequisite stage to your client VM and unzip it.
  • You deploy Contour and Envoy directly on Tanzu Kubernetes clusters. You do not need to deploy Contour on management clusters.
  • Set the context of kubectl to the Tanzu Kubernetes cluster on which to deploy Contour.
    • #kubectl config use-context avnish-admin@avnish
  • First, install cert-manager on the K8s cluster
    • kubectl apply -f tkg-extensions-v1.1.0/cert-manager/
  • Deploy Contour and Envoy on the cluster using:
    • #kubectl apply -f tkg-extensions-v1.1.0/ingress/contour/vsphere/

This completes the installation of the Contour ingress controller on the Tanzu Kubernetes cluster. Let's deploy an application and test the functionality.

Deploy a Sample Application

Next we need to deploy at least one Ingress object before Contour can serve traffic. Note that, as a security feature, Contour does not expose a port to the internet unless there's a reason it should. A great way to test your Contour installation is to deploy a sample application.

In this example we will deploy a simple web application, configure load balancing for it using the Ingress resource, and access it using the load balancer IP/FQDN. This application is available in the examples folder of the same bundle we downloaded from VMware. Let's deploy the application:

  • Run the command below to deploy the application; it creates a new namespace named "test-ingress", two services and one deployment.
    • #kubectl apply -f tkg-extensions-v1.1.0/ingress/contour/examples/common

A very simple way of installing the application; now let's create the Ingress resource.

Create Ingress Resource

Let's imagine a scenario where the "foo" team owns foo.bar.com/foo and the "bar" team owns foo.bar.com/bar. Considering this scenario:

  • Here is the Ingress resource definition for our example application:
    • apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: https-ingress
        namespace: test-ingress
        labels:
          app: hello
      spec:
        tls:
        - secretName: https-secret
          hosts:
            - foo.bar.com
        rules:
        - host: foo.bar.com
          http:
            paths:
            - path: /foo
              backend:
                serviceName: s1
                servicePort: 80
            - path: /bar
              backend:
                serviceName: s2
                servicePort: 80
  • Let's deploy it using the command below:
    • #kubectl apply -f tkg-extensions-v1.1.0/ingress/contour/examples/https-ingress
    • Check the status and grab the external IP address of the Contour "envoy" proxy service.
    • Add an /etc/hosts entry mapping that external IP address to foo.bar.com, as sketched after this list.
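A minimal verification from a client machine, assuming 192.168.115.100 stands in for the external IP reported for the envoy service (a placeholder; use the value from your own output, and note the namespace is an assumption based on the tkg-extensions layout):

#kubectl get service envoy -n tanzu-system-ingress
#echo "192.168.115.100 foo.bar.com" >> /etc/hosts
#curl -k https://foo.bar.com/foo
#curl -k https://foo.bar.com/bar

The -k flag skips certificate verification, since the example uses a self-signed certificate.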

Test the Application

To access the application, browse to the foo and bar services from a desktop that has access to the service network.

  • If you browse bar, the bar service responds
  • If you browse foo, the foo service responds

This completes the installation and configuration of Ingress on a VMware Tanzu Kubernetes Grid K8s cluster. Contour is VMware's open source ingress controller; it offers a rich feature set, can be found Here, and when customers choose the Tanzu portfolio they get Contour as a supported version from VMware.

Ingress on Cloud Director Container Service Extension

In this blog post I will be deploying an ingress controller along with a load balancer (the LB was deployed in the previous post) into a tenant organization VDC Kubernetes cluster which has been deployed by the Cloud Director Container Service Extension.

What is Ingress in Kubernetes

"NodePort" and "LoadBalancer" let you expose a service by specifying that value in the service's type. Ingress, on the other hand, is a completely independent resource from your service: you declare, create and destroy it separately from your services.

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give services externally reachable URLs, load balance traffic, terminate SSL/TLS, and offer name-based virtual hosting.


Pre-requisite

Before we begin we’ll need to have a few pieces already in place:

  • A Kubernetes cluster (See Deployment Options for provider specific details)
  • kubectl configured with admin access to your cluster
  • RBAC must be enabled on your cluster

Install Contour

To install Contour, run:

  • #kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

This command creates:

  • A new namespace projectcontour
  • A Kubernetes Daemonset running Envoy on each node in the cluster listening on host ports 80/443
  • Now we need to retrieve the external address assigned to Contour by the load balancer we deployed in the previous post. To get the LB IP, run the command sketched after this list.
  • The "External IP" comes from the range of IP addresses we provided in the LB config; we will NAT this IP on the VDC Edge gateway so it can be accessed from outside/the internet.
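The command referenced above; the quickstart manifest places the Envoy service in the projectcontour namespace (service name and namespace per the quickstart of that era, so verify against your deployment):

#kubectl get service envoy -n projectcontour -o wide

The EXTERNAL-IP column shows the address handed out by the load balancer.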

Deploy an Application

Next we need to deploy at least one Ingress object before Contour can serve traffic. Note that, as a security feature, Contour does not expose a port to the internet unless there's a reason it should. A great way to test your Contour installation is to deploy a sample application.

In this example we will deploy a simple web application, configure load balancing for it using the Ingress resource, and access it using the load balancer IP/FQDN. This application is hosted on GitHub and can be downloaded from Here. Once downloaded:

  • Create the coffee and the tea deployments and services
  • Create a secret with an SSL certificate and a key
  • Create an Ingress resource

This completes the deployment of the application.

Test the Application

To access the application, browse to the coffee and the tea services from a desktop that has access to the service network. You will also need to add the hostname/IP to your /etc/hosts file or your DNS server; see the sketch after this list.

  • To get coffee, browse the /coffee path; if you prefer tea, browse /tea.
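Assuming the example's default host name cafe.example.com (hypothetical; use the host defined in the Ingress resource you created) and the NATted public IP:

#echo "<public-ip> cafe.example.com" >> /etc/hosts
#curl -k https://cafe.example.com/coffee
#curl -k https://cafe.example.com/tea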

This completes the installation and configuration of Ingress on VMware Cloud Director Container Service Extension. Contour is VMware's open source ingress controller, offers a rich feature set, and can be found Here.

Load Balancer for Cloud Director Container Service Extension

In this blog post I will be deploying a load balancer into a tenant organization VDC Kubernetes cluster which has been deployed by the Cloud Director Container Service Extension.

What is LB in Kubernetes?

To understand load balancing on Kubernetes, we must first understand some Kubernetes basics:

  • A "pod" in Kubernetes is a set of containers that are related in terms of their function, and a "service" is a set of related pods that have the same set of functions. This level of abstraction insulates the client from the containers themselves. Pods can be created and destroyed by Kubernetes automatically, and since every new pod is assigned a new IP address, IP addresses for pods are not stable; therefore, direct communication between pods is not generally possible. However, services have their own IP addresses, which are relatively stable; thus, a request from an external resource is made to a service rather than a pod, and the service then dispatches the request to an available pod.

An external load balancer applies logic that ensures the optimal distribution of these requests. To create a load balancer, your clusters must be deployed into a Cloud Director based cloud; follow the steps below to configure a load balancer for your Kubernetes cluster:


MetalLB Load Balancer

MetalLB hooks into your Kubernetes cluster and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type "LoadBalancer" in your clusters. For more information, refer here.

Pre-requisite

MetalLB requires the following prerequisites to function:

  • A CSE Kubernetes cluster, running Kubernetes 1.13.0 or later.
  • A cluster network configuration that can coexist with MetalLB.
  • Some IPv4 addresses for MetalLB to hand out.
  • Here is my CSE cluster info; this is the cluster I will be using for this demo:

MetalLB Load Balancer Deployment

MetalLB deployment is a very simple three-step process:

  • Create a new namespace:
    • #kubectl create ns metallb-system
  • The command below deploys MetalLB to your cluster, under the metallb-system namespace:
    • #kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
    • This creates the metallb-system/controller deployment (the cluster-wide controller that handles IP address assignments), the metallb-system/speaker daemonset (the component that speaks the protocol(s) of your choice to make the services reachable), and service accounts for the controller and speaker, along with the RBAC permissions the components need to function.
  • On first install only, also create the memberlist secret:
    • #kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

NOTE – I am accessing my CSE cluster via NAT; that is why I am using "--insecure-skip-tls-verify".

MetalLB Layer 2 Configuration

The installation manifest does not include a configuration file. MetalLB’s components will still start, but will remain idle until you define and deploy a configmap. The specific configuration depends on the protocol you want to use to announce service IPs. Layer 2 mode is the simplest to configure and in many cases, you don’t need any protocol-specific configuration, only IP addresses.

  • The following ConfigMap gives MetalLB control over IPs from 192.168.98.220 to 192.168.98.250 and configures Layer 2 mode; apply it as sketched after the ConfigMap:
    • apiVersion: v1
      kind: ConfigMap
      metadata:
        namespace: metallb-system
        name: config
      data:
        config: |
          address-pools:
          - name: default
            protocol: layer2
            addresses:
            - 192.168.98.220-192.168.98.250
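Assuming the ConfigMap above is saved to a file named metallb-config.yaml (an arbitrary name), deploy it with:

#kubectl apply -f metallb-config.yaml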

This completes the installation and configuration of the load balancer. Let's go ahead and publish an application using a Kubernetes service of type "LoadBalancer"; CSE and MetalLB will take care of everything else.

Deploy an Application

Before deploying the application, I want to show my Cloud Director network topology, where the container workload is deployed and the Kubernetes services are created. Here we have one red segment (192.168.98.0/24) for container workload, where CSE has deployed the Kubernetes worker nodes, and on the same network we have deployed our MetalLB load balancer.

Kubernetes pods are created on Weave networks, the internal software-defined networking for CSE, and services are exposed using the load balancer, which is configured with the "OrgVDC" network.


Let's get started. We are going to use the "guestbook" application, which uses Redis to store its data: it writes its data to a Redis master instance and reads data from multiple Redis worker (slave) instances. A sketch of the deployment steps follows; let's go ahead and deploy it:
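A rough sketch of those steps, following the kubernetes.io guestbook tutorial of that era (the manifest URLs are as published there and may have moved since, so verify against the linked tutorial; the final patch is one way to expose the frontend through MetalLB):

#kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml
#kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml
#kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-deployment.yaml
#kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-service.yaml
#kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
#kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml
#kubectl patch service frontend -p '{"spec": {"type": "LoadBalancer"}}'

With the patch applied, MetalLB hands the frontend service an external IP from the configured pool.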

NOTE – All the above steps are covered in detail on the Kubernetes.io website – here is the Link.

Accessing Application

To access the guestbook Service, you need to find the external IP of the Service you just set up by running the command:

  • #kubectl get services
  • Go back to your organization VDC and create a NAT rule so that the service can be accessed using a routable/public IP.
  • Copy the IP address in the EXTERNAL-IP column and load the page in your browser:

It is a very easy and simple process to deploy and access your containerised applications running on the secure and easy-to-use VMware Cloud Director. Go ahead and start running containerised apps with upstream Kubernetes on Cloud Director.

Stay tuned!!! , in next post i will be deploying Ingress on CSE cluster.

Installing Tanzu Kubernetes Grid

This blog post helps you create Tanzu Kubernetes Grid clusters running on VMware Cloud on AWS and/or vSphere 6.7 Update 3 infrastructure.

NOTE – Tanzu Kubernetes Grid Plus is the only supported version on VMware Cloud on AWS. You can deploy Kubernetes clusters on your VMC clusters using Tanzu Kubernetes Grid Plus. Please refer to KB 78173 for the detailed support matrix.

Pre-requisite

On your vSphere/VMware Cloud on AWS instance ensure that you have the following objects in place:

  •  A resource pool in which to deploy the Tanzu Kubernetes Grid Instance (TKG)
  • A VM folder in which to collect the Tanzu Kubernetes Grid VMs (TKG)
  • Create a network segment with DHCP enabled
  • Firewall rules on compute segment.
  • This is not a must, but I prefer a Linux-based virtual machine called "cli-vm", which we will use as a bootstrap environment to install the Tanzu Kubernetes Grid CLI binaries for Linux

Installing Kubectl

Kubectl is a command-line tool for controlling Kubernetes clusters; we will use it for managing the K8s clusters deployed by TKG. To install the latest version of kubectl, follow the steps below on "cli-vm":

#curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
#chmod +x kubectl
#mv kubectl /usr/local/bin


Installing Docker

Docker is a daemon-based container engine which allows us to deploy applications inside containers. Since my VM runs CentOS, I started the Docker service; to start it, follow the steps below on cli-vm:

#systemctl start docker
#systemctl enable docker

To view the status of the daemon, run the following command:

#systemctl status docker

Install Tanzu Kubernetes Grid CLI

To use Tanzu Kubernetes Grid, you download and run the Tanzu Kubernetes Grid CLI on a local system; in our case we will install it on our "cli-vm":

  • Get the tkg cli binary from GA build page like:
    • For Linux platforms, download tkg-linux-amd64-v1.0.0_vmware.1.gz.
    • For Mac OS platforms, download tkg-darwin-amd64-v1.0.0_vmware.1.gz.
  • Unzip the file:
    • gunzip ./tkg-linux-amd64-v1.0.0_vmware.1.gz
    • Copy the unzipped binary to /usr/local/bin:
    • chmod +x ./tkg-linux-amd64-v1.0.0_vmware.1
    • mv ./tkg-linux-amd64-v1.0.0_vmware.1 /usr/local/bin/tkg
  • Check tkg env is ready:
    • # tkg version
      Client:
      Version: v1.0.0
      Git commit: 60f6fd5f40101d6b78e95a33334498ecca86176e
  • The /root/.tkg folder will be auto-created for the tkg config file


Create an SSH Key Pair

In order for Tanzu Kubernetes Grid VMs to run tasks in vSphere, you must provide the public key part of an SSH key pair to Tanzu Kubernetes Grid when you deploy the management cluster. You can use a tool such as ssh-keygen to generate a key pair.

  • On the machine on which you will run the Tanzu Kubernetes Grid CLI, run the following ssh-keygen command.
    • #ssh-keygen -t rsa -b 4096 -C "email@example.com"
  • At the prompt "Enter file in which to save the key (/root/.ssh/id_rsa):", press Enter to accept the default.
  • Enter and repeat a password for the key pair.
  • Open the file .ssh/id_rsa.pub in a text editor, and copy and paste its contents when you deploy the management cluster.

Import OVA & Create Template in VC

Before we can deploy a Tanzu Kubernetes Grid management cluster or Tanzu Kubernetes clusters to vSphere, we must provide a base OS image template to vSphere. Tanzu Kubernetes Grid creates the management cluster and Tanzu Kubernetes cluster node VMs from this template. Tanzu Kubernetes Grid provides a base OS image template in OVA format for you to import into vSphere; after importing the OVA, you must convert the VM into a vSphere VM template.

  • TKG needs two OVAs: photon-3-v1.17.3+vmware.2.ova and photon-3-capv-haproxy-v0.6.3+vmware.1.ova.
  • Convert both OVA VMs to templates and put them in the VM folder.
  • Here are high level steps:
    • In the vSphere Client, right-click on cluster & select Deploy OVF template.
    • Choose Local file, click the button to upload files, and navigate to the photon-3-v1.17.3_vmware.2.ova file on your local machine.
    • Follow the on screen instruction to deploy VM from the OVA template.
      • choose the appliance name
      • choose the destination datacenter or folder
      • choose the destination host, cluster, or resource pool
      • Accept EULA
      • Select the disk format and datastore
      • Select the network for the VM to connect to
      • Click Finish to deploy the VM
      • Right-click on the VM, select Template and click on Convert to Template

    • Follow the same steps for the HAProxy OVA – photon-3-capv-haproxy-v0.6.3+vmware.1.ova
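If you prefer to script the import rather than click through the wizard, the open-source govc CLI can do the same (a sketch, assuming govc is installed and GOVC_URL, GOVC_USERNAME and GOVC_PASSWORD point at your vCenter; the folder and VM names are this lab's assumptions):

#govc import.ova -folder TKG -name photon-3-v1.17.3+vmware.2 photon-3-v1.17.3+vmware.2.ova
#govc vm.markastemplate photon-3-v1.17.3+vmware.2
#govc import.ova -folder TKG -name photon-3-capv-haproxy-v0.6.3+vmware.1 photon-3-capv-haproxy-v0.6.3+vmware.1.ova
#govc vm.markastemplate photon-3-capv-haproxy-v0.6.3+vmware.1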

Installing TKG Management Cluster

Once the pre-work is done, follow the steps below to create the Tanzu management cluster:

  • On the machine on which you downloaded and installed the Tanzu Kubernetes Grid CLI, run the following command
    • #tkg init --ui
  • If your CLI VM runs an X11 desktop, this opens a browser on the loopback IP; if not, you can set up port forwarding using PuTTY on your Windows desktop, like this (an OpenSSH equivalent is sketched after this list):
  • Once you have successfully opened the connection, point your web browser at http://127.0.0.1:8081 and you should see the page below
  • Enter your IaaS provider details, where TKG can create the K8s cluster; in this case we enter the VMC vCenter Server information and click the "Connect" button. Accept the notifications, and the "Connect" button becomes "Connected". From here you just need to select where you want to deploy TKG: fill in the datacenter and the SSH key we created in the previous steps.
  • Select the Development or Production flavour and specify an instance type, then give the K8s management cluster a name and select the API server load balancer (specify the HAProxy VM template which we uploaded in the previous step)
  • Select Resource Pool(TKG), VM Folder(TKG) and WorkloadDatastore
  • Select the network name and leave the others at their defaults
  • Select the K8s PhotonOS template; this is the VM template that we uploaded in the previous steps.
  • Review all settings to ensure they match your selections, then click on "Deploy Management Cluster", which will begin the deployment process…
  • It takes around 5 to 6 minutes to complete the entire process of setting up the TKG management cluster. Once the management cluster has been deployed, you can close the web browser, go back to your SSH session, and stop the TKG UI rendering.
  • You can verify that all pods are up and running with:
    • #kubectl get pods -A
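The OpenSSH equivalent of the PuTTY tunnel mentioned above (a sketch; "cli-vm" is this lab's bootstrap VM name):

#ssh -L 8081:127.0.0.1:8081 root@cli-vm

With the tunnel up, http://127.0.0.1:8081 on your desktop reaches the TKG installer UI running on the cli-vm.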

Deploy Tanzu Kubernetes workload Cluster

Now that we have completed the Tanzu Kubernetes Grid management cluster, let's deploy a Tanzu Kubernetes cluster. We can use the "TKG" CLI to deploy Tanzu Kubernetes clusters from the management cluster to vSphere/VMware Cloud on AWS. Run the following command to deploy a TKG cluster called "avnish", or any other name you wish to use.

  • #tkg create cluster avnish --plan=dev

The above command will deploy a Tanzu Kubernetes cluster with the minimum default configuration. I am also deploying another cluster with a few more parameters specified.

Deploy a Highly Available Kubernetes Cluster

This command will deploy a highly available Kubernetes cluster.

  • #tkg create cluster my-cluster --plan dev --controlplane-machine-count 3 --worker-machine-count 5
  • Once the TKG cluster is up and running, we can run the following commands to get cluster information and credentials:

    • #tkg get clusters
    • #tkg get credentials <clustername>
  • To get the context of the TKG clusters, run the normal Kubernetes commands:
    • #kubectl config get-contexts
    • #kubectl config use-context avnish-admin@avnish
  • To get the Kubernetes node details, run the usual kubectl commands…

Scale TKG Clusters

After we create a Tanzu Kubernetes cluster, we can scale it up or down by increasing or reducing the number of node VMs that it contains.

  • To scale a cluster, use #tkg scale cluster with (see the example after this list):
    • --controlplane-machine-count to change the number of control plane nodes
    • --worker-machine-count to change the number of worker nodes
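For example, to grow the cluster created above (the name and counts are this lab's values; adjust them to your needs):

#tkg scale cluster my-cluster --controlplane-machine-count 3 --worker-machine-count 5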

Installing Octant on TKG Clusters

Octant is an open source, developer-centric web interface for Kubernetes that lets you inspect a Kubernetes cluster and its applications. It is simple to install and easy to use.

  • Here is more information on installation and configuration
  • You need to install Octant on your CLI VM and then proxy it so that you can open its web interface.

This completes the installation and configuration of Tanzu Kubernetes Grid. Once you have the management cluster ready, go ahead and deploy containerised applications on these clusters. TKG gives you lots of flexibility for deploying, scaling and managing multiple TKG workload clusters, which can be assigned per department/project.

Install and Configure Cloud Director App Launchpad

In continuation of my last post on the same topic, in this post we will deploy and configure Cloud Director App Launchpad.

With App Launchpad, VMware cloud providers can now deliver their own catalog-based applications, VMware Cloud Marketplace certified third-party cloud applications, and Bitnami catalog applications directly to customers through a simple catalog interface from a VMware Cloud Director plugin. This capability allows cloud providers to deliver application Platform as a Service to customers who needn't know anything about the supporting infrastructure for the catalog applications they deploy.

NOTE – In this release (App Launchpad 1.0), tenants can launch single-VM applications only.

Prerequisites for App Launchpad Installation

Before we install and configure App Launchpad, note that it requires a few external components at specific supported versions, which you must deploy and configure.

  • Create a new virtual machine with the requirements below
  • Ensure RabbitMQ is installed and configured under Cloud Director extensibility before deploying App Launchpad.
  • Inside the same RabbitMQ server, create a new exchange of type "direct" and a dedicated AMQP user that has full permissions to the virtual host of the AMQP broker.


Install Cloud Director App Launchpad

App Launchpad is deployed by installing an RPM package on a dedicated Linux virtual machine. Download App Launchpad from here and transfer the file to the ALP server; the installation is a very simple process:

  • Open an SSH connection to the installation target Linux virtual machine and log in by using a user account with sufficient privileges to install an RPM package.
  • Install the RPM package by running the installation command.
    • yum install -y vmware-vcd-alp-1.0.0-1593616.x86_64.rpm

Connect App Launchpad with Cloud Director

To configure App Launchpad with Cloud Director, we will use the alp command line utility. By using this utility:

  • We will establish a connection between App Launchpad and VMware Cloud Director
  • Define or create the App-Launchpad-Service account
  • and install the App Launchpad user interface plug-in for VMware Cloud Director.
  • The alp connect command also configures App Launchpad with your AMQP broker.
  • #alp connect --sa-user alpadmin --sa-pass <PASSWORD> --url https://10.96.98.50 --admin-user admin@system --admin-pass <PASSWORD> --amqp-user alp --amqp-pass <PASSWORD> force --amqp-exchange alpext
  • Accept the EULA and the certificate
  • If you have provided the correct information, it should show the connection as successfully configured
  • Restart the ALP service using
    • #systemctl restart alp
  • You can run #alp show to verify the connection

Configure App Launchpad

  • Now you can go to Cloud Director and check the installed ALP plugin.
  • Click on "LAUNCH SETUP" to configure it to offer Applications as a Service
  • If you want App Launchpad to configure the infrastructure automatically, select Yes and the software will set everything up automatically.
  • If you chose "No, I will set it up on my own", you need to complete the prerequisites manually.
  • App Launchpad supports the use of applications from the Bitnami applications catalog that is available in the VMware Cloud Marketplace.

    You can also create catalogs of your custom, in-house applications and configure App Launchpad to work with these catalogs.

  • Create sizing templates for the applications.
    1. Enter a name for the sizing template.
    2. Enter a vCPU count, a memory size (in GB), and a disk size (in GB)
  • To complete the initial configuration of App Launchpad, click Finish.

  • If everything goes fine and there are enough resources in Cloud Director, you will see "App Launchpad Setup Complete"

Onboarding Bitnami Applications

VMware cloud providers can import applications from the Bitnami applications catalog that is available in the VMware Cloud Marketplace. To begin, the provider must log in to the VMware Cloud Marketplace and subscribe to the Bitnami application to be deployed. Follow these steps:

  • Log in to the VMware Cloud Marketplace.
  • From the “Catalog” page, find the Bitnami application you wish to deploy (With App Launchpad 1.0, tenant users can only run single-VM applications) and select it to subscribe.
  • On the “Settings” page, choose “VCD” as the platform and select the correct version. Set the subscription type to “BYOL”. Click “Next” to proceed.


  • The subscription will now be added to your Cloud Director App Launchpad organization, and tenants can use it.
  • Make sure the "App Launchpad" organization has the right permissions.

Onboarding In-house Applications

Cloud providers can also add their own in-house applications to the content library of the "AppLaunchpad" provider organization and upload applications manually. To do so:

  • The provider admin needs to navigate to the "Content Libraries -> vApp Templates" page and click on "NEW".
  • By default, in-house applications have neither a logo nor a summary.

To give these apps a better user experience, the service provider can set metadata on the vApp templates via the GUI or the vCloud API. Here is the GUI way (an API sketch follows this list):

  • Go to the content library, click on the application you recently uploaded, go to Metadata, click on "Edit", and add the following items:
    • title – Title of Application
    • summary – Summary of Application which will be displayed on Application tile.
    • Description – Description of Application
    • version – Displays version number of Application.
    • logo – Provider can choose a logo using Internal/External web location like S3.
    • screenshot – Provider can choose a default screenshot using an internal HTTP/HTTPS server or an external web location like S3
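For the API route, a rough sketch of setting one metadata entry on a vApp template with curl (the endpoint and payload shape follow the vCloud API metadata documentation; the template ID, API version, and auth token are placeholders you must adapt to your environment):

#curl -k -X POST \
  -H "Accept: application/*+xml;version=33.0" \
  -H "x-vcloud-authorization: <session-token>" \
  -H "Content-Type: application/vnd.vmware.vcloud.metadata+xml" \
  -d '<Metadata xmlns="http://www.vmware.com/vcloud/v1.5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><MetadataEntry><Key>summary</Key><TypedValue xsi:type="MetadataStringValue"><Value>Two-tier demo app</Value></TypedValue></MetadataEntry></Metadata>' \
  https://<vcd-host>/api/vAppTemplate/vappTemplate-<id>/metadata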

This completes the installation and configuration of Cloud Director App Launchpad. As I said in my last post, App Launchpad is a free component for VMware Cloud Director and doesn't necessitate the use of Bitnami catalogs; providers can use their own appliances. So go ahead and give it a try, and start delivering PaaS-like solutions to your tenants.

VMware Cloud Director Encryption – Part III

In Part 1 and Part 2 we configured the HyTrust KeyControl cluster and vCenter; in this post we will configure Cloud Director to utilize what we have configured so far…

Attach Storage Policy to Provider VDC

To update the information in the vCloud Director database about the VM storage policies we created in the underlying vSphere environment, we must refresh the storage policies of the vCenter Server instance.

  • Log in to Cloud Director with a cloud admin account, go to vSphere Resources, choose the vCenter on which we created the policies, and click on "REFRESH POLICIES"
  • You can add a VM storage policy to a provider virtual data center, after which you can configure organization virtual data centers backed by this provider virtual data center to support the added storage policy.
    • Log in to Cloud Director, go to Provider VDCs, and choose the PVDC backed by the cluster where we created the storage policies.
    • Click on “ADD”
  • Choose the policy that we created in the previous post.

Attach Storage Policy to Organization VDC

You can configure an organization virtual data center to support a VM storage policy that you previously added to the backing provider virtual data center.

  • Now click on Organization VDCs, and click the name of the target organization virtual data center.
  • Click the Storage tab, and click Add.
  • You can see a list of the available additional storage policies in the source provider virtual data center
  • Select the check boxes of one or more storage policies that you want to add, and click Add.

Self Services Tenant Consumption

When a provider's tenant tries to create a VM/vApp (a virtual machine can exist as a standalone machine or within a vApp), they can use the encryption policy that we created previously.

  • In the new-VM creation wizard (from template), the tenant user must choose "use custom storage policy" and select the "encryption policy"
  • Once the VM is provisioned, the user can check the storage policy by clicking on the VM.
  • The user can also go into the "Hard Disk" section of the VM and check the disk policy.

Encrypt Named Disks

Named disks are standalone virtual disks that you create in organization VDCs. When you create a named disk, it is associated with an organization VDC but not with a virtual machine. After you create the disk in a VDC, the disk owner or an administrator can attach it to any virtual machine deployed in that VDC. The disk owner can also modify the disk properties, detach it from a virtual machine, and remove it from the VDC. System administrators and organization administrators have the same rights to use and modify the disk as the disk owner.

  • Here we will create a new encrypted named disk by choosing the storage policy "Encryption Policy".
  • Cloud Director allows users to attach these named disks:
  • Click the radio button next to the name of the named disk that you want to attach to a virtual machine, and click Attach
  • From the drop-down menu, select a virtual machine to which to attach the named disk, and click Apply.

This completes the three-part Cloud Director encryption configuration and its use by tenants. This feature enables VMware Cloud Providers with new offerings and monetisation opportunities; go ahead, deploy, and start offering additional/differentiated services.