Getting Started with VMware Cloud Director Container Service Extension 4.0


VMware Cloud Director Container Service Extension brings Kubernetes as a service to VMware Cloud Director, offering multi-tenant, VMware supported, production ready, and compatible Kubernetes services with Tanzu Kubernetes Grid. As a service provider administrator, you can add the service to your existing VMware Cloud Director tenants. By using VMware Cloud Director Container Service Extension, customers can also use Tanzu products and services such as Tanzu® Mission Control to manage their clusters.

Prerequisites for Container Service Extension 4.0

  • Provider-specific organization – Before you can configure the VMware Cloud Director Container Service Extension server, you must create an organization to host the VMware Cloud Director Container Service Extension server.
  • Organization VDC within the organization – The Container Service Extension appliance is deployed in this organization virtual data center.
  • Network connectivity – Network connectivity is required between the machine where VMware Cloud Director Container Service Extension is installed and the VMware Cloud Director server. VMware Cloud Director Container Service Extension communicates with VMware Cloud Director through the VMware Cloud Director public API endpoint.
  • Load balancing – The CSE 4.0 CPI automatically creates load balancers, so you must ensure that you have configured NSX Advanced Load Balancer, NSX Cloud, and an NSX Advanced Load Balancer Service Engine Group for tenants who need to create Tanzu Kubernetes clusters.

Provider Configuration

With the release of VMware Cloud Director Container Service Extension 4.0, service providers can use the CSE Management tab in the Kubernetes Container Clusters UI plug-in, which walks through the step-by-step process to configure the VMware Cloud Director Container Service Extension server.

Install Kubernetes Container Clusters UI plug-in for VMware Cloud Director

You can download the Kubernetes Container Clusters UI plug-in from the VMware Cloud Director Download Page and upload the plug-in to VMware Cloud Director.

NOTE: If you have previously used the Kubernetes Container Clusters plug-in with VMware Cloud Director, it is necessary to deactivate it before you can activate a newer version, as only one version of the plug-in can operate at one time in VMware Cloud Director. Once you activate a new plug-in, it is necessary to refresh your Internet browser to begin using it.

Once the partner has installed the plug-in, the Getting Started section within the CSE Management page helps providers learn about and set up VMware Cloud Director Container Service Extension in VMware Cloud Director through the Kubernetes Container Clusters UI plug-in 4.0. At a high level, this is a six-step process:

Let's follow these steps and deploy.

Step 1 – This section links to the locations where providers can download the two types of OVA files that are necessary for VMware Cloud Director Container Service Extension configuration.

NOTE: Do not download FIPS-enabled templates.

Step 2 – Create a catalog in VMware Cloud Director and upload the VMware Cloud Director Container Service Extension OVA files that you downloaded in Step 1 into this catalog.

Step 3 – This section initiates the VMware Cloud Director Container Service Extension server configuration process. In this process, you enter details such as software versions, proxy information, and syslog location. The workflow automatically creates the Kubernetes Clusters rights bundle, the CSE Admin Role, the Kubernetes Cluster Author role, and VM sizing policies. The Kubernetes Clusters rights bundle and the Kubernetes Cluster Author role are automatically published to all tenants, and the following Kubernetes resource versions are deployed:

Kubernetes resources and supported versions:
  • Cloud Provider Interface (CPI) – 1.2.0
  • Container Storage Interface (CSI) – 1.3.0
  • CAPVCD – 1.0.0

Step 4 – This section links to the Organization VDCs section in VMware Cloud Director, where you can assign VM sizing policies to customer organization VDCs. To avoid resource limit errors in clusters, you must add the Tanzu Kubernetes Grid VM sizing policies to organization virtual data centers. The Tanzu Kubernetes Grid VM sizing policies are automatically created in the previous step. The policies created are as follows:

Sizing policy – description – values:
  • TKG small – Small VM sizing policy – 2 CPU, 4 GB memory
  • TKG medium – Medium VM sizing policy – 2 CPU, 8 GB memory
  • TKG large – Large VM sizing policy – 4 CPU, 16 GB memory
  • TKG extra-large – X-large VM sizing policy – 8 CPU, 32 GB memory

NOTE: Providers can manually create more policies based on requirements and publish them to tenants.

In the VMware Cloud Director UI, select an organization VDC and, in the left panel under Policies, select VM Sizing. Click Add, select the Tanzu Kubernetes Grid sizing policy you want to add to the organization from the data grid, and click OK.

Step 5 – This section links to the Users section in VMware Cloud Director, where you can create a user with the CSE Admin Role. This role grants the user administration privileges for VMware Cloud Director Container Service Extension administrative purposes. You provide this user account in the OVA deployment parameters when you start the VMware Cloud Director Container Service Extension server.

Step 6 – This section links to the vApps section in VMware Cloud Director, where you create a vApp from the uploaded VMware Cloud Director Container Service Extension server OVA file to start the VMware Cloud Director Container Service Extension server:

  • Create a vApp from VMware Cloud Director Container Service Extension server OVA file.
  • Configure the VMware Cloud Director Container Service Extension server vApp deployment lease
  • Power on the VMware Cloud Director Container Service Extension server.

Container Service Extension OVA deployment

Enter a vApp name and, optionally, a description. Set the runtime lease and storage lease to No Lease so the vApp does not suspend automatically, and click Next.

Select a virtual data center and review the default configuration for resources, compute policies, hardware, networking, and edit details where necessary.

  • In the Custom Properties window, configure the following settings:
    • VCD host: VMware Cloud Director URL
    • CSE service account’s username: the CSE Admin user in the organization
    • CSE service account’s API Token: to generate an API token, log in to a provider session with the CSE user created in Step 5, go to “User Preferences”, and click “NEW” in the “Access Tokens” section. (When you generate an API access token, copy it immediately, because it appears only once. After you click OK, you cannot retrieve the token again; you can only revoke it.)

  • CSE service account’s org: the organization that the user with the CSE Admin Role belongs to, and that the VMware Cloud Director Container Service Extension server deploys to.
  • CSE service vApp’s org: the name of the provider organization where the CSE appliance will be deployed.

In the Virtual Applications tab, in the bottom left of the vApp, click Actions > Power > Start. This completes the vApp creation from the VMware Cloud Director Container Service Extension server OVA file, and it is the final step for service providers to perform before the VMware Cloud Director Container Service Extension server can operate and start provisioning Tanzu Kubernetes clusters.

CSE 4.0 is packed with capabilities that address developer personas with an improved feature set and simplified cluster lifecycle management. Users can now build, upgrade, resize, and delete Kubernetes clusters directly from the UI, making these tasks simpler and faster to accomplish than before. This completes the provider section of Container Service Extension; in the next blog post I will write about the tenant workflow.


VMware Cloud Director Charge Back Explained


VMware Chargeback not only enables metering and chargeback capabilities, but also provides visibility into infrastructure usage through performance and capacity dashboards for Cloud Providers as well as tenants.

To help Cloud Providers and tenants realise more value for every dollar they spend on infrastructure (ROI), and in turn provide similar value to their tenants, our focus is not only to expand the coverage of services that can be priced in VMware Chargeback, but also to provide visibility into the cost of infrastructure to providers and a billing summary to organizations, clearly highlighting the cost incurred by various business units. Before we dive in further to see what’s new with this release, please note:

  • vRealize Operations Tenant App is now rebranded to VMware Chargeback.
  • VMware Chargeback is also now available as a SaaS offering. The Software-as-a-Service (SaaS) offering will be available as early access, with limited availability, with the purchase or trial of the VMware Cloud Director™ service. See the Announcing VMware Chargeback for Managed Service Providers blog.

Creation of pricing policy based on chargeback strategy

Provider administrators can create one or more pricing policies based on how they want to charge back their tenants. Based on the VMware Cloud Director allocation models, each pricing policy is of the type Allocation Pool, Reservation Pool, or Pay-As-You-Go.

NOTE – The pricing policies apply to VMs at a minimum granularity of five minutes. VMs that are created and deleted within a span shorter than five minutes are still charged.

CPU Rate

Providers can charge the CPU rate based on GHz or vCPU count.

  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly
  • Charge Based on Power State – indicates the pricing model based on which the charges are applied; values are Always, Only when powered on, and Powered on at least once
  • Default Base Rate – any base rate that the provider wants to charge
  • Add Slab – providers can optionally charge different rates depending on the number of vCPUs used
  • Fixed Cost – fixed costs do not depend on the units of charging

Memory Rate

  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly
  • Charge Based on – indicates the pricing model based on which the charge is applied; values are Usage, Allocation, and Maximum from usage and allocation
  • Charge Based on Power State – indicates the pricing model based on which the charges are applied; values are Always, Only when powered on, and Powered on at least once
  • Default Base Rate – any base rate that the provider wants to charge
  • Add Slab – providers can optionally charge different rates depending on the memory allocated
  • Fixed Cost – fixed costs do not depend on the units of charging

Storage Rate

You can charge for storage either based on storage policies or independent of it.

  • This way of setting rates (independent of a storage policy) will be deprecated in a future release; it is advisable to use the Storage Policy option instead.
  • Select the Storage Policy Name from the drop-down menu.
  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly
  • Charge Based on – indicates the pricing model based on which the charge is applied; you can charge for the used storage or the configured storage of the VMs
  • Charge Based on Power State – decides whether the charge is applied based on the power state of the VM; values are Always, Only when powered on, and Powered on at least once
  • Add Slab – you can optionally charge different rates depending on the storage allocated

Network Rate

Enter the External Network Transmit and External Network Receive rates.

Note: If your network is backed by NSX-T, you are charged only for network data transmitted and network data received.

  • Network Transmit Rate – select the Charge Period and enter the Default Base Rate; using slabs, you can optionally charge different rates depending on the network data consumed
  • Network Receive Rate – select the Charge Period and enter the Default Base Rate; using slabs, you can optionally charge different rates depending on the network data consumed. Enter valid numbers for the Base Rate Slab and click Add Slab.

Advanced Network Rate

Under Edge Gateway Size, enter the base rates for the corresponding edge gateway sizes

  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly
  • Enter the Base Rate

Guest OS Rate

Use the Guest OS Rate to charge differently for different operating systems

  • Enter the Guest OS Name
  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly
  • Charge Based on Power State – decides whether the charge is applied based on the power state of the VM; values are Always, Only when powered on, and Powered on at least once
  • Enter the Base Rate

Cloud Director Availability

The Cloud Director Availability section is used to set pricing for replications created from Cloud Director Availability.

  • Replication SLA Profile – enter a replication policy name
  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly
  • Enter the Base Rate

You can also charge for the storage consumed by replication objects in the Storage Usage Charge section. This section is used to set additional pricing for storage used by Cloud Director Availability replications in Cloud Director. Note that the storage usage defined in this tab is charged in addition to the Storage Policy Base Rate.

vCenter Tag Rate

This section is used for any additional charges to be applied to VMs based on their tags discovered from vCenter (typical examples are Antivirus=true, SpecialSupport=true, and so on).

  • Enter the Tag Category and Tag Value
  • Charge based on Fixed Rate, or
  • Charge based on Alternate Pricing Policy – select the appropriate pricing policy
  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly
  • Charge Based on Power State – decides whether the charge is applied based on the power state of the VM; values are Always, Only when powered on, and Powered on at least once
  • Enter the Base Rate

VCD Metadata Rate

Use the VCD Metadata Rate to charge differently for different metadata set on vApps

NOTE: Metadata-based prices are available in bills only if the Enable Metadata option is enabled in the vRealize Operations Management Pack for VMware Cloud Director.

  • Enter the Tag Category and Tag Value
  • Charge based on Fixed Rate, or
  • Charge based on Alternate Pricing Policy – select the appropriate pricing policy
  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly
  • Charge Based on Power State – decides whether the charge is applied based on the power state of the VM; values are Always, Only when powered on, and Powered on at least once
  • Enter the Base Rate

One Time Fixed Cost

One Time Fixed Cost is used to charge for one-time incidental charges on virtual machines, such as creation/setup charges or charges for one-off incidents like installation of a patch. These costs do not repeat on a recurring basis.

For the values, follow the VCD Metadata and vCenter Tag sections above.

Rate Factors

Rate factors are used to either increase or discount the prices, either against individual resources consumed by the virtual machines or against the whole charge for a virtual machine. Some examples are:

  • Increase the CPU rate by 20% (factor 1.2) for all VMs tagged with CPUOptimized=true
  • Discount the overall charge on a VM by 50% (factor 0.5) for all VMs tagged with PromotionalVM=True
  • VCD Metadata
    • Enter the Tag Key and Tag Value
      • Change the price of Total, vCPU, Memory, and Storage
      • By applying a factor of – increase or decrease the price by entering a valid number
  • vCenter Tag
    • Enter the Tag Key and Tag Value
      • Change the price of Total, vCPU, Memory, and Storage
      • By applying a factor of – increase or decrease the price by entering a valid number

Tanzu Kubernetes Clusters

This section is used to charge for Tanzu Kubernetes clusters and their objects.

  • Cluster Fixed Cost
    • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly
    • Fixed Cost – fixed costs do not depend on the units of charging
  • Cluster CPU Rate
    • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly
    • Charge Based on – decides whether the charge is applied based on Usage or Allocation
    • Default Base Rate (per GHz)
  • Cluster Memory Rate
    • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly
    • Charge Based on – decides whether the charge is applied based on Usage or Allocation
    • Default Base Rate (per GB)

Additional Fixed Cost

You can use the Additional Fixed Cost section to charge at the Org VDC level, for example for overall tax, overall discounts, and so on. The charges can be applied to selected Org VDCs based on Org VDC metadata.

  • Fixed Cost
    • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly
    • Fixed Cost
  • VCD Metadata – enter the Tag Key and Tag Value
  • VCD Metadata One Time – enter the Tag Key and Tag Value

Apply Policy

Cloud Director Chargeback gives service providers the flexibility to map the created pricing policies to specific organization VDCs. By doing this, the service provider can holistically define how each of their customers is charged based on resource types.

Bills

Every tenant or customer of the service provider can review their bills using the Cloud Director Chargeback app. The service provider administrator can generate bills for a tenant by selecting a specific resource and a pricing policy that must be applied for a defined period, and can also log in to review the bill details.

This completes the feature walkthrough of Cloud Director Chargeback. Go ahead, deploy it, and add native chargeback power to your cloud.

Cross-Cloud Disaster Recovery with VMware Cloud on AWS and Azure VMware Solution


Disaster recovery is an important aspect of any cloud deployment. It is always possible that an entire cloud data center or region of a cloud provider goes down. This has already happened to most cloud providers, such as Amazon AWS, Microsoft Azure, and Google Cloud, and will surely happen again in the future. These providers will readily suggest that you have a disaster recovery and business continuity strategy that spans multiple regions, so that if a single geographic region goes down, business can continue to operate from another region. This sounds good in theory, but there are several issues with relying on another region of a single cloud provider. Here are some of the key reasons why I think a single cloud provider’s cross-region DR is not that effective:

  • A single cloud region failure might cause huge capacity issues for the other regions used as DR.
  • Cloud regions are not fully independent; for example, AWS RDS allows read replicas in other regions, but one wrong entry gets replicated across the read replicas, which breaks the notion that “cloud regions are independent”.
  • Data is better protected from accidental deletion when stored across clouds. For example, if malicious code, an employee, or a cloud provider’s employee runs a script that deletes all the data, in most cases this will not impact the other cloud.

In this blog post we will see how the VMware cross-cloud disaster recovery solution can help customers and partners overcome these BC/DR challenges.

Deployment Architecture

Here is my deployment architecture and connectivity:

  • One VMware Cloud on AWS SDDC
  • One Azure VMware Solution SDDC
  • Both SDDCs are connected over a Megaport MCR

Activate VMware Site Recovery on VMware Cloud on AWS

To configure Site Recovery on the VMware Cloud on AWS SDDC, go to the SDDC page, click the Add Ons tab, and under the Site Recovery add-on, click the ACTIVATE button.

In the pop-up window, click ACTIVATE again.

This deploys SRM on the SDDC; wait for it to finish.

Deploy VMware Site Recovery Manager on Azure VMware Solution

In your Azure VMware Solution private cloud, under Manage, select Add-ons > Disaster recovery and click on “Get Started”

From the Disaster Recovery Solution drop-down, select VMware Site Recovery Manager (SRM) and provide the License key, select agree with terms and conditions, and then select Install

After the SRM appliance installs successfully, you’ll need to install the vSphere Replication appliances. Each replication server accommodates up to 200 protected VMs. Scale in or scale out as per your needs.

Move the vSphere server slider to indicate the number of replication servers you want based on the number of VMs to be protected. Then select Install

Once installed, verify that both the SRM and vSphere Replication appliances are present. After installing VMware SRM and vSphere Replication, you need to complete the configuration and site pairing in vCenter Server.

  1. Sign in to vCenter Server as cloudadmin@vsphere.local.
  2. Navigate to Site Recovery, check the status of both vSphere Replication and VMware SRM, and then select OPEN Site Recovery to launch the client.

Configure site pairing in vCenter Server

Before starting the site pairing, make sure the firewall rules between VMware Cloud on AWS and Azure VMware Solution have been opened as described Here and Here.

To start pairing, select NEW SITE PAIR in the Site Recovery (SR) client in the new tab that opens.

Enter the remote site details, select FIND VCENTER SERVER INSTANCES, select the remote vCenter, and click NEXT. At this point, the client should discover the VRM and SRM appliances on both sides as services to pair.

Select the appliances to pair and then select NEXT.

Review the settings and then select FINISH. If successful, the client displays another panel for the pairing. However, if unsuccessful, an alarm will be reported.

After you’ve created the site pairing, you can view the site pairs and other related details, and you are ready to plan for disaster recovery.

Planning

Mappings allow you to specify how Site Recovery Manager maps virtual machine resources on the protected site to resources on the recovery site. You can configure site-wide mappings to map objects in the vCenter Server inventory on the protected site to corresponding objects in the vCenter Server inventory on the recovery site:

  • Network Mapping
  • IP Customization
  • Folder Mapping
  • Resource Mapping
  • Storage Policy Mapping
  • Placeholder Datastores

Creating Protection Groups

A protection group is a collection of virtual machines that Site Recovery Manager protects together. Protection groups are a per-SDDC configuration and need to be created on each SDDC if VMs are replicated bi-directionally.

Recovery Plan

A recovery plan is like an automated run book. It controls every step of the recovery process, including the order in which Site Recovery Manager powers on and powers off virtual machines, the network addresses that recovered virtual machines use, and so on. Recovery plans are flexible and customizable.

A recovery plan runs a series of steps that must be performed in a specific order for a given workflow such as a planned migration or re-protection. You cannot change the order or purpose of the steps, but you can insert your own steps that display messages and run commands.

A recovery plan includes one or more protection groups. Conversely, you can include a protection group in more than one recovery plan. For example, you can create one recovery plan to handle a planned migration of services from the protected site to the recovery site for the whole SDDC and another set of plans per individual departments. Thus, having multiple recovery plans referencing one protection group allows you to decide how to perform recovery.

Steps to add a VM for replication:

There are multiple ways to do this; I am explaining one here:

  • Choose the VM, right-click it, select All Site Recovery actions, and click Configure Replication.
  • Choose the target site and the replication server to handle the replication.
  • VM validation happens, and then you choose the target datastore.
  • Under Replication settings, choose the RPO, point-in-time instances, and so on.
  • Choose the protection group to which you want to add this VM, check the summary, and click Finish.

Cross-cloud disaster recovery provides one of the most secure and reliable solutions for service availability. The reason cross-cloud disaster recovery is often the best route for businesses is that it provides IT resilience and business continuity. This continuity is most important when considering how companies operate, how customers and clients rely on them for continuous service, and when looking at your company’s critical data, which you do not want to be exposed or compromised.

Frankly speaking, IT disasters happen everywhere, including in public clouds, and much more frequently than you might think. When they occur, they present stressful situations that require fast action. Even with a strategic method for addressing these occurrences in place, things can seem to spin out of control. Even when posed with these situations, IT leaders must keep their composure, remain calm, and be able to fully rely on the system they have in place or the partner they are working with for disaster recovery.

Customers and partners with VMware Cloud on AWS and Azure VMware Solution can build a cross-cloud disaster recovery solution to simplify disaster recovery with the only VMware-integrated solution that runs on any cloud. VMware Site Recovery Manager (SRM) provides policy-based management, minimizes downtime in case of disasters via automated orchestration, and enables non-disruptive testing of your disaster recovery plans.

Tanzu Service on VMware Cloud on AWS – Kubernetes Cluster Operations


Tanzu Kubernetes Grid is a managed service offered by VMware Cloud on AWS. You activate Tanzu Kubernetes Grid in one or more SDDC clusters to configure Tanzu support in the SDDC vCenter Server. In my previous post (Getting Started with Tanzu Service on VMware Cloud on AWS), I walked you through how to enable the Tanzu service on VMware Cloud on AWS.

In this post I will deploy a Tanzu Kubernetes cluster through the GUI (from Tanzu Mission Control) as well as through the CLI using the updated v1alpha2 API, so let's get started.

Deploy Tanzu Kubernetes Cluster using Tanzu Mission Control

Go to Tanzu Mission Control and validate that the VMC Supervisor Cluster is registered and healthy: click Administration, go to “Management clusters”, and check the status.

Now on Tanzu Mission Control, click on “Clusters” and then click on “CREATE CLUSTER”

Select your VMC Tanzu Management Cluster and click on “CONTINUE TO CREATE CLUSTER”

On the next screen, choose the “Provisioner” (the namespace name). You add a provisioner by creating a vSphere namespace in the Supervisor Cluster, which you can do in the VMC vCenter.

Next, select the Kubernetes version (the latest supported version is preselected for you), the Pod CIDR, and the Service CIDR. You can also optionally select the default storage class for the cluster and the allowed storage classes. The list of storage classes that you can choose from is taken from your vSphere namespace.

Select the type of cluster you want to create. The primary difference between the two is that the highly available cluster is deployed with multiple control plane nodes.

You can optionally select a different instance type for the cluster’s control plane node and its storage class, and you can optionally add additional storage volumes for your control plane.

To configure additional volumes, click Add Volume and then specify the name, mount path, and capacity for the volume. To add another, click Add Volume again.

Next, you can define the default node pool and create additional node pools for your cluster. Specify the number of worker nodes to provision, select the instance type for the workload clusters, and select the storage class.

When you are ready to provision the new cluster, click Create Cluster and wait for a few minutes.

You can also view the vCenter activities related to the creation of the Tanzu Kubernetes cluster.

Once the cluster is fully created and TMC agent reported back, you should see below status on TMC console, which shows that cluster has been successfully created.

This completes the Tanzu Kubernetes cluster deployment using the GUI.

Deploy Tanzu Kubernetes Grid Service using v1alpha2 API yaml

The Tanzu Kubernetes Grid Service v1alpha2 API provides a robust set of enhancements for provisioning Tanzu Kubernetes clusters. Below is the YAML specification I am using to provision a Tanzu Kubernetes cluster using the Tanzu Kubernetes Grid Service v1alpha2 API.

apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tkgsv2
  namespace: wwmca
spec:
  topology:
    controlPlane:
      replicas: 1
      vmClass: guaranteed-medium
      storageClass: vmc-workload-storage-policy-cluster-1
      volumes:
        - name: etcd
          mountPath: /var/lib/etcd
          capacity:
            storage: 4Gi
      tkr:  
        reference:
          name: v1.21.2---vmware.1-tkg.1.ee25d55
    nodePools:
    - name: worker-nodepool-a1
      replicas: 2
      vmClass: best-effort-large
      storageClass: vmc-workload-storage-policy-cluster-1
      tkr:  
        reference:
          name: v1.21.2---vmware.1-tkg.1.ee25d55
  settings:
    storage:
      defaultClass: vmc-workload-storage-policy-cluster-1
    network:
      cni:
        name: antrea
      services:
        cidrBlocks: ["198.53.100.0/16"]
      pods:
        cidrBlocks: ["192.0.5.0/16"]
      serviceDomain: managedcluster.local
      trust:
        additionalTrustedCAs:
          - name: CompanyInternalCA-1
            data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tDQpNSUlG

Two key parameters that I am using for cluster provisioning:

  • tkr.reference.name is the Tanzu Kubernetes release (TKR) name to be used by the control plane and worker nodes; the supported format is “v1.21.2---vmware.1-tkg.1.ee25d55”.
  • trust configures additional certificates for the cluster; if omitted, no additional certificates are configured.
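
Assuming the specification above is saved to a file named tkgsv2.yaml (the file name is my own), a minimal sketch of applying it from the vSphere namespace context to start provisioning:

$ kubectl config use-context wwmca      # context created by the vSphere Plugin for kubectl, named after the namespace
$ kubectl apply -f tkgsv2.yaml          # submits the TanzuKubernetesCluster spec to the Supervisor Cluster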

You can run the command below to check the status of cluster provisioning:

#kubectl get tkc

Scale a Tanzu Kubernetes cluster
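
To scale a provisioned cluster, you change the replica counts in the cluster manifest and let the Tanzu Kubernetes Grid Service reconcile the change. A minimal sketch, assuming the cluster and namespace names used above:

$ kubectl config use-context wwmca
$ kubectl edit tanzukubernetescluster/tkgsv2
# in the editor, change spec.topology.controlPlane.replicas or the replicas value of a
# nodePool entry, then save; the service adds or removes nodes to match the new counts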

Publish the service Internally/Externally

Before we can make our service available over the Internet, it should be accessible from within the VMware Cloud on AWS instance. Platform operators can publish applications through a Kubernetes Service of type LoadBalancer. This ability is made possible through the NSX-T Container Plugin (NCP) functionality built into Tanzu Kubernetes Grid. Let's deploy a basic container and expose it as a Service of type LoadBalancer:

#kubectl run nginx1 --image=nginx
#kubectl expose pod nginx1 --type=LoadBalancer --port=80

Now you can access the application internally by using the load balancer's internal IP address.
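
To find the load balancer IP that NCP allocated for the service, a quick check such as the following should work:

$ kubectl get svc nginx1
# the EXTERNAL-IP column shows the virtual IP of the NSX-T load balancer for this service
$ curl http://<EXTERNAL-IP>             # test the application from inside the SDDC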

Access application from Internet

To make it publicly available, we must assign a public IP address and configure a destination NAT. Request a public IP on the VMC console and create a NAT rule on the Internet tab to access the application from the Internet.

Now access the application from the Internet and you should be able to reach it successfully using the provided public IP.

Exposing a Kubernetes service to the Internet takes a couple more steps than exposing it to your internal networks, but the VMware Cloud Console makes those steps simple enough. After exposing the Kubernetes service using an NSX-T load balancer, you can request a new public IP address and then configure a NAT rule to send that traffic to the virtual IP address of the load balancer.

Run Data Platform in Minutes on VMware Cloud Director


Enterprise applications increasingly rely on large amounts of data that need to be distributed, processed, and stored. Open source and commercially supported software stacks are available to implement a data platform that can offer common data management services, accelerating the development and deployment of data-hungry business applications. VMware has made it simple for cloud providers to offer, deploy, and manage such platforms using Data Platform Blueprints.

Understand Validated Blueprint and Requirement for Data Platform

You can find validated blueprint designs in the Bitnami Application Catalog and VMware Marketplace, including blueprints for building containerized data platforms with Kafka, Apache Spark, Solr, and Elasticsearch.

These engineered and tested data platform blueprints are implemented via Helm charts. They capture security and resource settings, affinity placement parameters, and observability endpoint configurations for data software runtimes. Using the Helm CLI or KubeApps tool, Helm charts enable the single-step, production-ready deployment of a data platform in a Kubernetes cluster, covering automated installation and the configuration of multiple containerized data software runtimes.
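
For reference, the same blueprint can also be installed directly with the Helm CLI outside of App Launchpad. A minimal sketch, assuming the public Bitnami repository and the dataplatform-bp1 chart name (check the catalog for the exact chart and version):

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
$ helm install my-dataplatform bitnami/dataplatform-bp1 \
    --namespace dataplatform --create-namespace
# deploys the Kafka, Spark, and Solr stack with the blueprint's placement and sizing defaults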

Each data platform blueprint comes with Kubernetes cluster node and resource configuration guidelines to ensure the optimized sizing and utilization of the underlying Kubernetes cluster compute, memory, and storage resources. For example, the chart's README.md covers the Kubernetes deployment guidelines for the Kafka, Apache Spark, and Solr blueprint.

This blueprint enables the fully automated Kubernetes deployment of such a multi-stack data platform, covering the following software components:

  • Apache Kafka – Data distribution bus with buffering capabilities
  • Apache Spark – In-memory data analytics
  • Solr – Data persistence and search
  • Data Platform Signature State Controller – Kubernetes controller that emits data platform health and state metrics in Prometheus format.

These containerized stateful software stacks are deployed in multi-node cluster configurations, which is defined by the Helm chart blueprint for this data platform deployment, covering:

  • Pod placement rules – affinity rules to ensure placement diversity, prevent single points of failure, and optimize load distribution
  • Pod resource sizing rules – Optimized Pod and JVM sizing settings for optimal performance and efficient resource usage
  • Default settings to ensure Pod access security

Cloud Director Provider Configuration

Install and Configure VMware Cloud Director App Launchpad

App Launchpad is a VMware Cloud Director service extension that service providers can use to create and publish catalogs of deployment-ready applications. Tenant users can then deploy the applications with a single click. For App Launchpad installation and configuration, see Here. Once App Launchpad is installed, configure it with the Bitnami Helm repository as follows:

  • Log in to the VMware Cloud Director service provider admin portal.
  • From the main menu (), select App Launchpad
  • On the Settings tab, click on Helm Chart Repository
  • Click Add.
  • Add the required repository details.

Add DataPlatform Blueprint from Helm Chart Repository

  1. Log in to the VMware Cloud Director service provider admin portal.
  2. From the main menu (), select App Launchpad.
  3. On the Applications tab, click Add New.
  4. Select Chart Repository as the application source.
  5. Select the chart repository from which you want to import applications and click Next.
  6. Select the application and application version that you want to add and click Next. You can add multiple applications at once.
  7. Select an existing VMware Cloud Director catalog to which you add the application or create one, and click Next.
  8. Review the applications details and click Add.

Tenant Self-Service Deployment

Once the provider has published the Data Platform blueprints to tenants, tenants can deploy them on a Tanzu Kubernetes cluster in a self-service way. Before deploying, the tenant needs to:

  • Create a Tanzu Kubernetes cluster with enough CPU and memory for the master and worker nodes. For this blog I created a cluster with four worker nodes, each with 4 vCPU and 16 GB memory.

Below are the minimum Kubernetes Cluster requirements for “Small” size data platform:

Data platform size – Kubernetes cluster size – usage:
  • Small – 1 master node (2 CPU, 4 Gi memory) and 3 worker nodes (4 CPU, 32 Gi memory) – data and application evaluation, development, and functional testing
  • Create a default storage class – Once the Tanzu Kubernetes cluster is created, create a default storage class for it using the sample YAML below:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # ensure this is set to true
  name: vcd-disk-dev
provisioner: named-disk.csi.cloud-director.vmware.com
reclaimPolicy: Delete
parameters:
  storageProfile: "Tanzu01"   # your org VDC storage policy
  filesystem: "ext4"
  • Tenant deploys the Data Platform blueprint – Now the tenant goes to the Cloud Director App Launchpad and deploys the Data Platform blueprint with their choice of settings or with the default settings:
  1. Select the Data Platform blueprint and click Deploy.
  2. Enter the application name.
  3. Select the Tanzu Kubernetes cluster on which the tenant wants to install the data platform.
  4. Click on “Launch Application”
  • This blueprint bootstraps a Data Platform Blueprint-1 deployment on a Kubernetes cluster using the Helm package manager. Once the chart is installed, the deployed data platform cluster comprises:
    • Zookeeper with 3 nodes to be used for both Kafka and Solr
    • Kafka with 3 nodes using the zookeeper deployed above
    • Solr with 2 nodes using the zookeeper deployed above
    • Spark with 1 Master and 2 worker nodes
    • Data Platform Metrics emitter and Prometheus exporter

This process also creates the required persistent volumes for the application. You can view the persistent volumes inside the Cloud Director console by going into the Tanzu Kubernetes cluster,

or by going into the organization VDC and clicking “Named Disks”.

The entire process takes some time; once done, the tenant should see that all the pods are up and running, all the required volumes are created and attached, and all the required services are exposed.
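
A quick way to verify this from the cluster side (the namespace name depends on how App Launchpad deployed the chart, so it is a placeholder here):

$ kubectl get pods,pvc,svc -n <data-platform-namespace>
# all pods should be Running, the PVCs Bound, and the Kafka, ZooKeeper, Solr, and Spark services listed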

Testing the Kafka Cluster

(I am not a Kafka expert; I took testing guidance from the Internet, especially the Platform9 website.)

We are going to deploy a test client that will execute scripts against the Kafka cluster. Create and apply the following deployment:


$ vi testclient.yaml

apiVersion: v1
kind: Pod
metadata:
  name: testclient
  namespace: kafka
spec:
  containers:
  - name: kafka
    image: solsson/kafka:0.11.0.0
    command:
      - sh
      - -c
      - "exec tail -f /dev/null"

$ kubectl apply -f testclient.yaml

Now, using this “testclient” container, we will create the first topic, which we are going to use to post messages:

$ kubectl --kubeconfig kubeconfig-k8sdata -n 7f55bcb2-75f5-42db-b2a2-7c18e8ba5011 exec -ti testclient -- ./bin/kafka-topics.sh --zookeeper dp02-zookeeper:2181 --topic messages --create --partitions 1 --replication-factor 1

Make sure you use the correct hostname for the ZooKeeper cluster and the topic configuration. Now let's verify that the topic exists by using the command below:

$  kubectl --kubeconfig kubeconfig-k8sdata -n 7f55bcb2-75f5-42db-b2a2-7c18e8ba5011 exec -ti testclient -- ./bin/kafka-topics.sh --zookeeper dp02-zookeeper:2181 --list

Now we can create one consumer and one producer instance so that we can send and consume messages. Open two PuTTY shells and on the first shell create the consumer:

$ kubectl --kubeconfig kubeconfig-k8sdata -n 7f55bcb2-75f5-42db-b2a2-7c18e8ba5011 exec -ti testclient -- ./bin/kafka-console-consumer.sh --bootstrap-server dp02-kafka:9092 --topic messages --from-beginning

On the second shell, create the producer and start sending messages:

$kubectl --kubeconfig kubeconfig-k8sdata -n 7f55bcb2-75f5-42db-b2a2-7c18e8ba5011 exec -ti testclient -- ./bin/kafka-console-producer.sh --broker-list dp02-kafka:9092 --topic messages

>Hi
>How are you ?

On the consumer shell, you should see these messages getting populated through the data streaming platform.

Cloud Director with Container Service Extension, along with App Launchpad, offers providers the easiest way to offer many monetizable services in a multi-tenant environment, and the easiest way for tenants to deploy and consume these services. So providers, what are you waiting for?

Windows Bare Metal Servers on NSX-T overlay Networks


In this post, I will configure a Windows 2016/2019 bare metal server as a transport node in NSX-T and then configure an NSX-T overlay segment on that server, which allows a VM and a bare metal server on the same network to communicate.

To use NSX-T Data Center on a Windows physical server (bare metal server), let's first understand a few terms which we will use in this post.

  • Application – represents the actual application running on the physical server, such as a web server or a database server.
  • Application Interface – represents the network interface card (NIC) that the application uses for sending and receiving traffic. One application interface per physical server is supported.
  • Management Interface – represents the NIC that manages the physical server.
  • VIF – the peer of the application interface, which is attached to the logical switch. This is similar to a VM vNIC.

Now let's configure our Windows server to operate in an NSX overlay environment.

Enable WinRM service on Windows 2019

First of all, we need to enable Windows Remote Management (WinRM) on Windows Server 2016/2019 to allow the Windows server to interoperate with third-party software and hardware. To enable the WinRM service with a self-signed certificate, run:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
PS$ wget -o ConfigureWinRMService.ps1 https://raw.github.com/vmware/bare-metal-server-integration-with-nsxt/blob/master/bms-ansible-nsx/windows/ConfigureWinRMService.ps1
PS$ powershell.exe -ExecutionPolicy ByPass -File ConfigureWinRMService.ps1

Run the following command to verify the configuration of WinRM listeners:

winrm e winrm/config/listener

NOTE: For production bare metal servers, enable WinRM with HTTPS for security reasons; the procedure is explained here.

Installing NSX-T Kernel Module on Windows 2019 Server

Now let's proceed with installing the NSX kernel module on the Windows Server 2016/2019 bare metal server. Make sure to download the NSX kernel module for Windows Server 2016/2019 matching the version of your NSX-T instance from VMware Downloads.

Start the installation of the NSX kernel module by executing the .exe file on your Windows BM server.

Configure the bare metal server as a transport node in NSX-T

Before we add the bare metal server as a transport node, we need to create a new uplink profile in NSX-T that we are going to use for the bare metal servers. An uplink profile defines policies for the uplinks. The settings defined by uplink profiles can include teaming policies, active and standby links, the transport VLAN ID, and the MTU setting.

In my lab, the Windows 2016/2019 bare metal server has two network adapters: one NIC in the management VLAN and the other on a TEP VLAN (VLAN 160).

Once the uplink profile is configured, we can proceed with adding the Windows 2016/2019 bare metal server as a transport node in NSX-T. In the NSX-T web UI, go to System –> Fabric –> Nodes and click +ADD.

Enter the management interface IP address of your Windows bare metal host and its credentials, and do not change the Installation Location. NSX-T validates your credentials against the Windows bare metal server and then allows you to move to the next step.

On the next screen, choose a virtual switch name or leave the default, select the overlay transport zone (as we are connecting this host to the overlay), and select the uplink profile and the management uplink interface.

On the next screen, configure the IP address, subnet, and gateway for the TEP interface, either by specifying a static IP list or by choosing an IP pool that belongs to the TEP VLAN.

Click Next. This starts preparing your Windows bare metal server for NSX-T.

Once the preparation and configuration are complete, we can attach a segment from the same screen or click Continue Later. Let's click “Continue Later” for now; we will attach the segment in a separate step.

Now if you look at your Windows bare metal server in the NSX-T console, it is ready for NSX-T and asking us to attach an overlay segment.

Attach Overlay Segment

Select the host in the “Host Transport Nodes” section, click “Actions”, and then click “Manage Segment”, which takes you to the same screen that SELECT SEGMENT would have shown during the original deployment.

Now select the segment on which the application interface of the physical server will reside and click “ADD SEGMENT PORT”.

Add Segment Port and Attach Application Interface

On the add Segment port screen:

Choose Assign New IP (this will be your application IP on the Windows bare metal server) –> NSX Interface Name (the default is “nsx-eth”) – this is the application interface name on the physical server.

Default Gateway –> provide the T0 or T1 gateway address.

IP Assignment – I am using Static, but you can also use DHCP or an IP pool for the application interface.

Save – Once Save is pressed, the configuration is sent to the physical server, and you can see on the physical server that the application IP has been assigned to a virtual interface.

Now you can see host config in NSX-T Manager console, everything is green and showing up.

Now you can reach this bare metal server from a VM with IP address “172.16.20.101”, which is on the same segment as the physical server, without doing any bridging.

If you click on the Windows server, you can see other information, specifically the “Geneve Tunnels” between the ESXi host on which the VM is running and the Windows bare metal host on which your application is running.

This completes the configuration. It gives customers and partners the opportunity to run VMs and bare metal servers on the same network, with security (such as micro-segmentation) managed from a single console, the NSX-T console. I hope this helps; please share your feedback 🙂

Deploy Cloud Director orgVDC Edge API Way

Recently I was working with a provider who needed some guidance on deploying an orgVDC edge via the API, so that their developers could automate customer orgVDC edge deployment for tenants. Here are the steps I shared with him, shared here too. Let's get started.

First we need to authenticate to Cloud Director. In this example, my credentials are the equivalent of the Cloud Director System Administrator, and I am connecting to a provider session using the Cloud Director OpenAPI, with Authorization set to Basic and the provider credentials.

Let's do the POST API call to get an API session token.

POST - https://<vcdurl>/cloudapi/1.0.0/sessions/provider

Once you have authenticated, click on the Headers tab and you will see your authorization key for this session: the key “X-VMWARE-VCLOUD-ACCESS-TOKEN”. We will use this key to authenticate further API calls as a “Bearer” token.
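
If you prefer the command line over Postman, roughly the same call with curl looks like this (the API version in the Accept header is an assumption; adjust it to your Cloud Director release):

$ curl -k -i -X POST \
    -H "Accept: application/json;version=36.0" \
    -u "administrator@system:<password>" \
    "https://<vcdurl>/cloudapi/1.0.0/sessions/provider"
# the X-VMWARE-VCLOUD-ACCESS-TOKEN response header carries the bearer token for later calls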

The next API call is to get the currently deployed edge gateways using the above bearer token. For me this was the easiest way to reference the required objects for the next API call (I deployed a sample edge in the org to get the references).
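
A sketch of the same call with curl, reusing the token from the previous step:

$ TOKEN=<value of X-VMWARE-VCLOUD-ACCESS-TOKEN>
$ curl -k \
    -H "Accept: application/json;version=36.0" \
    -H "Authorization: Bearer $TOKEN" \
    "https://<vcdurl>/cloudapi/1.0.0/edgeGateways"
# returns a paged list of edge gateways with their uplink, org VDC, and org references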

The result of the above API call gives you lots of information that will be used in our next API call to create an edge:

Now let's create the orgVDC edge by building the body for the POST API call. From the above result, take a few mandatory field values such as:

  • uplinkId
  • Subnets/gateway/IP addresses, as per your environment
  • orgVdc name and its ID
  • ownerRef name and its ID
  • orgRef name and its ID

Here is the reference payload of my API call for orgVDC edge creation:

{
            "name": "avnish2",
            "description": "test avnish2",
            "edgeGatewayUplinks": [
                {
                    "uplinkId": "urn:vcloud:network:e0d2cd31-3034-489c-b318-cd36efe89729",
                    "subnets": {
                        "values": [
                            {
                                "gateway": "172.16.43.1",
                                "prefixLength": 24,
                                "ipRanges": {
                                    "values": [
                                        {
                                          "startAddress": "172.16.43.41",
                                          "endAddress": "172.16.43.50"
                                        }
                                    ]
                                },
                                "enabled": true,
                                "primaryIp": "172.16.43.2"
                            }
                        ]
                    },
                    "connected": true,
                    "quickAddAllocatedIpCount": null,
                    "dedicated": false,
                    "vrfLiteBacked": false
                }
            ],
            "orgVdc": {
                "name": "avnish-ovdc",
                "id": "urn:vcloud:vdc:45202398-51ee-4887-886a-1eecb9fdaa1c"
            },
            "ownerRef": {
                "name": "avnish-ovdc",
                "id": "urn:vcloud:vdc:45202398-51ee-4887-886a-1eecb9fdaa1c"
            },
            "orgRef": {
                "name": "Avnish",
                "id": "urn:vcloud:org:ddf6d02f-1552-49ac-8c25-30c36f63d5b6"
            }

        }

Do the Post to https://<vcdip>/cloudapi/1.0.0/edgeGateways
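
With curl, assuming the payload above is saved as edge.json, the call would look roughly like this:

$ curl -k -X POST \
    -H "Accept: application/json;version=36.0" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $TOKEN" \
    -d @edge.json \
    "https://<vcdip>/cloudapi/1.0.0/edgeGateways"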

You should see an “Accepted” response, which confirms that Cloud Director will deploy the orgVDC edge with the information that you provided in the above POST API call.

Once you get the “Accepted” response, if you go to Cloud Director you should see that the above call has deployed an “OrgVDC Edge” for the specific organization.

I hope this helps in case you want to use the API to provision an orgVDC edge for a tenant.

SSL VPN with Cloud Director

Secure remote access to cloud services is essential to cloud adoption and use. A Cloud Director based cloud allows every tenant to use a dedicated edge gateway, providing a simple and easy-to-use solution that supports IPsec site-to-site virtual private networks (VPNs) backed by VMware NSX-T. NSX-T today does not support SSL VPN, and this limitation requires providers or tenants to select alternative solutions, either open source or commercial, depending on the desired mix of features and support. Examples of such solutions are OpenVPN and WireGuard.

In this blog post we will deploy OpenVPN as a tenant admin to allow access to cloud resources using a VPN client.

Create a new VDC network

Let's create a new routed org VDC network and deploy OpenVPN on this network; you can also deploy it on an existing routed network.

  1. In Cloud Director, go to the Networking section and click New to create a new routed network.
  2. Select the appropriate org VDC.
  3. Select the network type “Routed”.
  4. Choose the appropriate edge for this routed network association.
  5. Give the network a gateway CIDR.
  6. Create a pool of IPs for the network to allocate.
  7. Check the summary and click Finish to create the routed network.

Creating NAT Rules

After creating the routed network, go to the edge gateway and open the edge gateway configuration:

Create two NAT rules for OpenVPN appliance:

  1. SNAT rule to allow the OpenVPN appliance outbound Internet access
    1. OpenVPN appliance IP to one external IP
  2. DNAT rule to allow the OpenVPN appliance inbound access from the Internet
    1. External IP to the OpenVPN appliance IP

You might have to open certain firewall rules to access the OpenVPN admin console, depending on where you are accessing the console from.

NAT for Cloud Director Services

Since Cloud Director service is a managed service and its architecture is different from a cloud provider's environment, for CDS we need to follow a few extra steps, as explained below:

Deploy OpenVPN Appliance

  1. Download the latest OpenVPN appliance from here: https://openvpn.net/downloads/openvpn-as-latest-vmware.ova
  2. Upload it into a catalog, select it from the catalog, and click Create vApp.

Provide a Name and Description

Select the Org VDC

Edit VM configuration

This is a very important step; make sure you:

  1. Switch to the advanced networking workflow.
  2. Change the IP assignment to “Manual”.
  3. Assign a valid IP manually from the range created during network creation; if you do not set the IP here, you will have to struggle with the IP assignment on the appliance later.

Review and finish. This deploys the OpenVPN appliance; once deployed, power on the appliance. Now it is time to configure the appliance.

OpenVPN Initial Configuration

In VMware Cloud Director go to Compute and click on Virtual Machines and open the console of your OpenVPN virtual machine.

Log in to the VM as the root user. For the password, click “Guest OS Customization” and click Edit.

Copy the password in the “Specify password” section and use it to log in to the OpenVPN virtual machine.

After login, you will be prompted to answer a few questions:

Licence Agreement: as usual, you do not have a choice; enter “yes”.

Will this be the primary Access Server node?: Enter “Yes”

Please specify the network interface and IP address to be used by the Admin Web UI:  If the guest customizations were applied correctly, this should default to eth0, which should be configured with an IP address on the network you selected during deployment.

Please specify the port number for the Admin Web UI: Enter your desired port number, or accept the default of 943.

Please specify the TCP port number for the OpenVPN Daemon: Use default “443”

Should client traffic be routed by default through VPN?: Choose “No”.

If you enter “yes”, it prevents the client device from accessing any other networks while the VPN is connected.

Should client DNS traffic be routed by default through the VPN?: Choose “No” if your above answer is “No”.

Use local authentication via internal DB? : Enter Yes or No based on your choice of authentication.

Should private subnets be accessible to clients by default? : Enter “yes”; this makes your cloud networks accessible via the VPN.

Do you wish to login to the Admin UI as “openvpn”?: Answer “yes”, which creates a local user account named “openvpn”. If you answer no, you'll need to set up a different user name and password.

Please specify your Activation key: If you’ve purchased a licence, enter the licence key, otherwise leave this blank.

If you are using the default “openvpn” account, after installing the OpenVPN Access Server for the first time you must set a password for the “openvpn” user at the command line with “#passwd openvpn”, and then use that to log in to the Admin UI. This completes the deployment of the appliance; now you can browse to the configuration page, on which we will configure the SSL VPN specific options.

This section shows you whether the VPN Server is currently ON or OFF. Based on the current status, you can either Start the Server or Stop the Server with the button you see there.

Inside VPN Network Settings, specify the network settings that apply to your configuration.

Create users or configure other authentication methods; I am creating a sample user to access cloud resources based on the assigned permissions.

Tenant User Access

The tenant user accesses the public IP of the SSL VPN that we assigned initially and logs in with their credentials.

Once logged in, the user can download the VPN client as well as the connection profiles; these connection profiles contain the login information for the user.

In case the user sees a private IP in the profile (@192.168.10.101), they can click the pen icon to edit the profile.

Once the user has edited the profile, they can successfully connect to the cloud.

White Label OpenVPN

In case the provider wants to white label OpenVPN, this can easily be done by following this simple procedure (a shell sketch follows the list):

  1. Copy a logo to the OpenVPN appliance and edit the file: nano /usr/local/openvpn_as/etc/as.conf
  2. Add the line below after the #sa.company_name line for the company logo:
    1. sa.logo_image_file=/usr/local/openvpn_as/companylogo.png
  3. Uncomment sa.company_name and change it to the desired company name text.
  4. To hide the footer, below the sa.company_name and/or sa.logo_image_file variables, add the following: cs.footer=hide
  5. Save and exit the file, then restart the OpenVPN Access Server: service openvpnas restart
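
The same steps as a minimal shell sketch (the file path and logo name come from the list above; back up as.conf first, and edit existing lines instead of appending if they are already present):

$ cp /usr/local/openvpn_as/etc/as.conf /usr/local/openvpn_as/etc/as.conf.bak
$ echo 'sa.company_name=<Your Company Name>' >> /usr/local/openvpn_as/etc/as.conf
$ echo 'sa.logo_image_file=/usr/local/openvpn_as/companylogo.png' >> /usr/local/openvpn_as/etc/as.conf
$ echo 'cs.footer=hide' >> /usr/local/openvpn_as/etc/as.conf
$ service openvpnas restart             # restart the Access Server so the branding changes take effect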

This completes the installation, configuration, and white labelling of OpenVPN. These configuration steps apply to VMware Cloud Director and to the Cloud Director service tenant portal on VMware Cloud on AWS. Please share feedback if any.

Tanzu Basic – Building TKG Cluster

Featured

In continuation of our Tanzu Basic deployment series, this is the last part. By now we have our vSphere with Tanzu cluster enabled and deployed, so the next step is to create Tanzu Kubernetes clusters. In case you missed the previous posts, here they are:

  1. Getting Started with Tanzu Basic
  2. Tanzu Basic – Enable Workload Management

Create a new namespace

A vSphere Namespace is a kind of resource pool or container that I can give to a project, team or customer: a "Kubernetes + VM environment" where they can create and manage their application containers and virtual machines. They can't see other teams' environments and they can't expand past the limits set by administrators. The vSphere Namespace construct allows the vSphere admin to set several policies in one place. The user/team/customer/project can create new workloads as they wish within their vSphere Namespace. You also set resource limits and permissions on the namespace so that DevOps engineers can access it. Let's create our first namespace by going to the vCenter Menu and clicking on "Workload Management".

Once you are in "Workload Management", click on "CREATE NAMESPACE".

Select the vSphere Cluster on which you enabled “workload management”

  1. Give a DNS-compliant name for the namespace
  2. Select Network for the namespace

Now we have successfully created “namespace” named “tenant1-namespace”

The next step is to add storage. Here we need to choose a vCenter storage policy which TKG will use to provision the control plane VMs; this policy will also show up as a Kubernetes storage class for this namespace. The persistent volume claims that correspond to persistent volumes can originate from the Tanzu Kubernetes cluster.

After you assign the storage policy, vSphere with Tanzu creates a matching Kubernetes storage class in the Namespace. For the VMware Tanzu Kubernetes Clusters, the storage class is automatically replicated from the namespace to the Kubernetes cluster. When you assign multiple storage policies to the namespace, a separate storage class is created for each storage policy.

Access Namespace

Share the Kubernetes Control Plane URL with DevOps engineers as well as the user name they can use to log in to the namespace through the Kubernetes CLI Tools for vSphere. You can grant access to more than one namespace to a DevOps engineer.

The developer browses the URL and downloads the Kubernetes CLI Tools package for their environment (Windows, Linux or Mac).

To provision Tanzu Kubernetes clusters by using the Tanzu Kubernetes Grid Service, we connect to the Supervisor Cluster using the vSphere Plugin for kubectl, which we downloaded in the step above, and authenticate with the vCenter Single Sign-On credentials that the vSphere admin gave to the developer.

After you log in to the Supervisor Cluster, the vSphere Plugin for kubectl generates the context for the cluster. In Kubernetes, a configuration context contains a cluster, a namespace, and a user. You can view the cluster context in the file .kube/config. This file is commonly called the kubeconfig file.

I am switching to the "tenant1-namespace" context as I have access to multiple namespaces; a DevOps user can switch context with commands like the ones below.
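A rough sketch (the server address and username are placeholders, following the same convention used later in this post):

#kubectl vsphere login --server=IP-ADDRESS --vsphere-username USERNAME

#kubectl config use-context tenant1-namespace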

The commands below help you explore the environment and find the right VM types and Kubernetes versions for sizing your clusters:

#kubectl get sc

This command lists all the storage classes available in the namespace

#kubectl get virtualmachineimages

This command lists all the VM images available for creating TKG clusters; this helps you decide which Kubernetes version you want to use

#kubectl get virtualmachineclasses

This command lists all the VM classes (T-shirt sizes) available for TKG clusters

Deploy a TKG Cluster

To deploy a TKG cluster we need to create a YAML file with the required configuration parameters that define the cluster.
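A minimal sketch of such a manifest, matching the description in the list below, might look like this (the cluster name is hypothetical; use the VM class and storage class names returned by the kubectl commands above):

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01            # hypothetical cluster name
  namespace: tenant1-namespace    # the vSphere Namespace created earlier
spec:
  distribution:
    version: v1.18                # resolved to the latest matching distribution
  topology:
    controlPlane:
      count: 3
      class: best-effort-small    # one of the VM classes from kubectl get virtualmachineclasses
      storageClass: basic         # one of the storage classes from kubectl get sc
    workers:
      count: 3
      class: best-effort-small
      storageClass: basic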

  1. The YAML above provisions a cluster with three control plane nodes and three worker nodes.
  2. The apiVersion and kind parameter values are constants.
  3. The Kubernetes version, listed as v1.18, is resolved to the most recent distribution matching that minor version.
  4. The VM class best-effort-<size> has no reservations. For more information, see Virtual Machine Class Types for Tanzu Kubernetes Clusters.

Once the file is ready, let's provision the Tanzu Kubernetes cluster using the following kubectl command:
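Assuming the manifest above was saved as tkg-cluster-01.yaml (hypothetical file name), the cluster is created with:

#kubectl apply -f tkg-cluster-01.yaml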

Monitor cluster provisioning using the vSphere Client; the TKG management plane creates the Kubernetes cluster VMs automatically.

Verify cluster provisioning using the following kubectl commands.
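For example (the cluster name is the hypothetical one from the manifest above):

#kubectl get tanzukubernetescluster -n tenant1-namespace

#kubectl describe tanzukubernetescluster tkg-cluster-01 -n tenant1-namespace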

You can continue to monitor/verify cluster provisioning using the #kubectl describe tanzukubernetescluster command; at the end of the command output it shows:

Node Status – the node status from the Kubernetes perspective

VM Status – the node status from the vCenter perspective

After around 15-20 minutes, you should see the VM and Node Status as ready and the Phase as Running. This completes the deployment of a Kubernetes cluster on vSphere 7 with Tanzu. We have successfully deployed a Kubernetes cluster; now let's deploy an application and expose it to the external world.

Deploy an Application

To deploy your first application, we need to log in to the new cluster that we created; you can use the command below:

#kubectl vsphere login --server=IP-ADDRESS --vsphere-username USERNAME --tanzu-kubernetes-cluster-name CLUSTER-NAME --tanzu-kubernetes-cluster-namespace NAMESPACE

Once login completes, we can deploy application workloads to Tanzu Kubernetes clusters using pods, services, persistent volumes, and higher-level resources such as Deployments and ReplicaSets. Let's deploy an application using the command below:

#kubectl run --restart=Never --image=gcr.io/kuar-demo/kuard-amd64:blue kuard

The command has now successfully deployed the application; let's expose it so that we can access it through the VMware HAProxy load balancer:

#kubectl expose pod kuard --type=LoadBalancer --name=kuard --port=8080 

The application is exposed successfully. Let's get the public IP which has been assigned to the application by the above command; in this case the external IP is 192.168.117.35.
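If you need to look the address up again, the assigned external IP shows up in the EXTERNAL-IP column of the service:

#kubectl get svc kuard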

Let's access the application using the IP assigned to it and confirm that it is reachable.

Get Visibility of Cluster using Octant

Octant is a tool for developers to understand how applications run on a Kubernetes cluster. It aims to be part of the developer's toolkit for gaining insight into and approaching the complexity found in Kubernetes. Octant offers a combination of introspective tooling, cluster navigation, and object management, along with a plugin system to further extend its capabilities. Installation is pretty simple and detailed Here

This completes the series: we installed a TKG Kubernetes cluster, ran an application on top of it, and accessed that application through HAProxy. Please share your feedback if any!

Tanzu Basic – Enable Workload Management

Featured

In continuation of the last post where we deployed the VMware HAProxy appliance, we will now enable a vSphere cluster for Workload Management by configuring it as a Supervisor Cluster.

Part 1 – Getting Started with Tanzu Basic

What is Workload Management

With Workload Management we can deploy and operate the compute, networking, and storage infrastructure for vSphere with Kubernetes. vSphere with Kubernetes transforms vSphere into a platform for running Kubernetes workloads natively on the hypervisor layer. When enabled on a vSphere cluster, vSphere with Kubernetes provides the capability to run Kubernetes workloads directly on ESXi hosts and to create upstream Kubernetes clusters within dedicated resource pools.

Since we selected a Supervisor Cluster with the vSphere networking stack in the previous post, vSphere Native Pods will not be available, but we can create Tanzu Kubernetes clusters.

Pre-Requisite

As per our HAProxy deployment, we chose the HAProxy VM configuration with three virtual NICs, which connects HAProxy to a Frontend network. DevOps users and external services can access HAProxy through virtual IPs on the Frontend network. Below are the prerequisites to enable Workload Management:

  • DRS and HA must be enabled on the vSphere cluster, and DRS must be in fully automated mode.
  • Configure shared storage for the cluster. Shared storage is required for vSphere DRS, HA, and storing persistent volumes of containers.
  • Storage Policy: Create a storage policy for the placement of Kubernetes control plane VMs.
    • I have created two policies named "basic" & "TanzuBasic"
    • NOTE: You should create the policy with a lower-case policy name
    • This policy has been created with tag-based placement rules
  • Content Library: Create a subscribed content library using the URL https://wp-content.vmware.com/v2/latest/lib.json on the vCenter Server to download the VM images used for creating the nodes of Tanzu Kubernetes clusters. The library will contain the latest distributions of Kubernetes.
  • Add all hosts from the cluster to a vSphere Distributed Switch and create port groups for Workload Networks

Deploy Workload Management

With the release of vSphere 7 Update 1, a free trial of Tanzu is available for a 60-day evaluation. Enter your details to receive communication from VMware and get started with Tanzu.

The next screen takes you to the networking options available with vCenter; make sure:

  • You choose correct vCenter
  • For networking there are two networking stacks; since we haven't installed NSX-T it will be greyed out and unavailable, so choose "vCenter Server Network" and move to "Next"

On the next screen you will be presented with the vSphere clusters which are compatible with Tanzu. In case you don't see any cluster, go to the "Incompatible" section and click on the cluster, which will give you guidance on the reason for the incompatibility; go back, fix the reason, and try again.

Select the size of the resource allocation you need for the Control Plane. For the evaluation, Tiny or Small should be enough and click on Next.

Storage: Select the storage policy which we created as part of the pre-requisites and click on Next

Load Balancer: This section is very important and we need to ensure that we provide correct values:

  • Enter a DNS-compliant name; don't use underscores in the name
  • Select the type of load balancer: "HA Proxy"
  • Enter the management data plane IP address. This is the management IP and port number assigned to the VMware HAProxy management interface; in our case it is 192.168.115.10:5556.
  • Enter the username and password used during deployment of the HAProxy
  • Enter the IP address ranges for virtual servers. We need to provide the IP ranges for virtual servers; these are the IP addresses which we defined on the frontend network. It's the exact same range which we used during the HAProxy configuration process, but this time we have to write the full range instead of using CIDR format; in this case I am using 192.168.117.33-192.168.117.62
  • Finally, enter the server CA cert. If you added a cert during deployment, use that. If you used a self-signed cert, you can retrieve that data from the VM by browsing /etc/haproxy/ca.crt (see the example command after this list).
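For example, assuming SSH is enabled on the HAProxy appliance and using the management IP from this lab (adjust the user and address for your environment):

#ssh root@192.168.115.10 cat /etc/haproxy/ca.crt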

Management Network: The next portion is to configure IP addresses for the Tanzu Supervisor control plane VMs; these come from the management IP range.

  • We will need 5 consecutive free IPs from the management IP range. The Starting IP Address is the first IP in a range of five IPs to assign to the Supervisor control plane VMs' management network interfaces.
  • One IP is assigned to each of the three Supervisor control plane VMs in the cluster
  • One IP is used for a floating IP which we will use to connect to the management plane
  • One IP is reserved for use during the upgrade process
  • This will be on the mgmt port group

Workload Network:

Service IP Address: we can take the default network subnet for "IP Address for Services"; change this if you are using this subnet anywhere else. This subnet is for internal communication and is not routed.

The last network is where we define the Kubernetes node IP range; this applies to both the Supervisor Cluster and the guest TKG clusters. This range comes from the workload IP range which we created in the last post with VLAN 116.

  • Port Group – workload
  • IP Address Range – 192.168.116.32-192.168.116.63

Finally, choose the content library which we created as part of the pre-requisites.

If you have provided the right information with the correct configuration, it will take around 20 minutes to install and configure the entire TKG management plane. You might see a few errors while the management plane is being configured, but you can ignore them, as those operations are retried automatically and the errors clear once the particular task succeeds.

NOTE – The above screenshot has a different cluster name as I have taken it from a different environment, but the IP schema is the same.

I hope this article helps you enable your first "Workload Management" vSphere cluster without NSX-T. In the next blog post I will cover the deployment of TKG clusters and other things around that…

Load Balancer as a Service with Cloud Director

Featured

NSX Advanced Load Balancer's (AVI) intent-based software load balancer provides scalable application delivery across any infrastructure. AVI provides 100% software load balancing to ensure a fast, scalable and secure application experience. It delivers elasticity and intelligence across any environment. It scales from 0 to 1 million SSL transactions per second in minutes, achieves 90% faster provisioning, and delivers 50% lower TCO than a traditional appliance-based approach.

With the release of Cloud Director 10.2, NSX ALB is natively integrated with Cloud Director to provide self-service Load Balancing as a Service (LBaaS), where providers can expose load balancing functionality to tenants and tenants consume it based on their requirements. In this blog post we will cover how to configure LBaaS.

Here is the high-level workflow:

  1. Deploy NSX ALB Controller Cluster
  2. Configure NSX-T Cloud
  3. Discover NSX-T Inventory,Logical Segments, NSGroups (ALB does it automatically)
  4. Discover vCenter Inventory,Hosts, Clusters, Switches (ALB does it automatically)
  5. Upload SE OVA to content library (ALB does it automatically, you just need to specify name of content library)
  6. Register NSX ALB Controller, NSX-T Cloud and Service Engines to Cloud Director and Publish to tenants (Provider Controlled Configuration)
  7. Create Virtual Service,Pools and other settings (Tenant Self Service)
  8. Create/Delete SE VMs & connect to tenant network (ALB/VCD Automatically)

Deploy NSX ALB (AVI) Controller Cluster

The NSX ALB (AVI) Controller provides a single point of control and management for the cloud. The AVI Controller runs on a VM and can be managed using its web interface, CLI, or REST API, or in this case Cloud Director. The AVI Controller stores and manages all policies related to services and management. To ensure AVI Controller high availability, we need to deploy 3 AVI Controller nodes to create a 3-node AVI Controller cluster.

Deployment Process is documented Here & Cluster creation Process is Here

Create NSX-T Cloud inside NSX ALB (AVI) Controller

The NSX ALB (AVI) Controller uses APIs to interface with the NSX-T Manager and vCenter to discover the infrastructure. Here are the high-level activities to configure an NSX-T Cloud in the NSX ALB management console:

  1. Configure NSX-T manager IP/URL (One per Cloud)
  2. Provide admin credentials
  3. Select Transport zone (One to One Mapping – One TZ per Cloud)
  4. Select Logical Segment to use as SE Management Network
  5. Configure vCenter server IP/URL (One per Cloud)
  6. Provide Login username and password
  7. Select Content Library to push SE OVA into Content Library

Service Engine Groups & Configuration

Service Engines are created within a group, which contains the definition of how the SEs should be sized, placed, and made highly available. Each cloud will have at least one SE group.

  1. SE Groups contain sizing, scaling, placement and HA properties
  2. A new SE will be created from the SE Group properties
  3. SE Group options will vary based upon the cloud type
  4. An SE is always a member of the group it was created within in this case NSX-T Cloud
  5. Each SE group is an isolation domain
  6. Apps may gracefully migrate, scale, or failover across SEs in the groups

​Service Engine High Availability:

Active/Standby

  1. VS is active on one SE, standby on another
  2. No VS scaleout support
  3. Primarily for default gateway / non-SNAT app support
  4. Fastest failover, but half of SE resources are idle

​Elastic N + M 

  1. All SEs are active
  2. N = number of SEs a new Virtual Service is scaled across
  3. M = the buffer, or number of failures the group can sustain
  4. SE failover decision determined at time of failure
  5. Session replication done after new SE is chosen
  6. Slower failover, less SE resource requirement

Elastic Active / Active 

  1. All SEs are active
  2. Virtual Services must be scaled across at least 2 Service engines
  3. Session info proactively replicated to other scaled service engines
  4. Faster failover, require more SE resources

Cloud Director Configuration

Cloud Director configuration is two-fold: Provider Config and Tenant Config. Let's first cover the Provider Config…

Provider Configuration

Register AVI Controller: The provider administrator logs in as an admin and registers the AVI Controller with Cloud Director. The provider has the option to add multiple AVI Controllers.

NOTE – In case you are registering with NSX ALB's default self-signed certificate and it throws an error while registering, regenerate the self-signed certificate in NSX ALB.

Register NSX-T cloud

Next we need to register the NSX-T Cloud, which we configured in the ALB Controller, with Cloud Director:

  1. Select one of the registered AVI Controllers
  2. Provide a meaningful name for the registration
  3. Select the NSX-T Cloud which we registered in AVI
  4. Click on ADD.

Assign Service Engine groups

Now register Service Engine groups, either "Dedicated" or "Shared", based on tenant requirements; a provider can also have both types of groups and assign them to tenants as needed.

  1. Select NSX-T Cloud which we had registered above
  2. Select the “Reservation Model”
    1. Dedicated Reservation Model:- AVI creates two dedicated Service Engine nodes for each load-balancer-enabled Org VDC Edge Gateway.
    2. Shared Reservation Model:- Shared is elastic and shared among all tenants. AVI creates a pool of Service Engines that are shared across tenants. Capacity allocation is managed in VCD; AVI elastically deploys and un-deploys Service Engines based on usage.

Provider Enables and Allocates resources to Tenant

The provider enables LB functionality in the context of an Org VDC Edge by following the steps below:

  1. Click on Edges
  2. Choose the Edge on which to enable load balancing
  3. Go to "Load Balancer" and click on "General Settings"
  4. Click on "Edit"
  5. Toggle on "Activate" to activate the load balancer
  6. Select the Service Specification

The next step is to assign Service Engines to the tenant based on requirements; for that, go to Service Engine Group, click on "ADD", and add one of the SE groups we registered previously to one of the customer's Edges.

Provider can restrict usage of Service Engines by configuring:

  1. Maximum Allowed: The maximum number of virtual services the Edge Gateway is allowed to use.
  2. Reserved: The number of guaranteed virtual services available to the Edge Gateway.

Tenant User Self Service Configuration

Pools: Pools maintain the list of servers assigned to them and perform health monitoring, load balancing, and persistence.

  1. Inside General Settings some of the key settings are:
    1. Provide Name of the Pool
    2. Load Balancing Algorithm
    3. Default Server Port
    4. Persistence
    5. Health Monitor
  2. Inside Members section:
    1. Add Virtual Machine IP addresses which needs to be load balanced
    2. Define State, Port and Ratio
    3. SSL Settings allow SSL offload and Common Name Check

Virtual Services: A virtual service advertises an IP address and ports to the external world and listens for client traffic. When a virtual service receives traffic, it may be configured to:

  1. Proxy the client’s network connection.
  2. Perform security, acceleration, load balancing, gather traffic statistics, and other tasks.
  3. Forward the client’s request data to the destination pool for load balancing.

The tenant chooses the Service Engine Group which the provider has assigned to them, then chooses the load balancer pool which we created in the step above, and, most importantly, the virtual IP. This IP address can be from the external IP range of the Org VDC, or, if you want an internal IP, you can use any IP.

So in my example, I am running two virtual machines with Org VDC internal IP addresses and the VIP is from the external public IP address range, so if I browse the VIP, I can reach the web servers successfully using the VCD/AVI integration.

This completes the basic integration and configuration of LBaaS using Cloud Director & NSX Advanced Load Balancer. Feel free to share feedback.

VMware Cloud Director Two Factor Authentication with VMware Verify

In this post, I will be configuring two-factor authentication (2FA) for VMware Cloud Director using Workspace ONE Access, formerly known as VMware Identity Manager (vIDM). Two-factor authentication is a mechanism that checks username and password as usual, but adds an additional security control before users are authenticated. It is a particular deployment of a more generic approach known as Multi-Factor Authentication (MFA). Throughout this post, I will be configuring VMware Verify as that second authentication factor.

What is VMware Verify ?

VMware Verify is built in to Workspace ONE Access (vIDM) at no additional cost, providing a 2FA solution for applications. VMware Verify can be set as a requirement on a per-app basis for web or virtual apps on the Workspace ONE launcher, or to log in to Workspace ONE to view your launcher in the first place. The VMware Verify app is currently available on iOS and Android. VMware Verify supports 3 methods of authentication:

  1. OneTouch approval
  2. One-time passcode via VMware Verify app (soft token)
  3. One-time passcode over SMS

By using VMware Verify, security is increased since a successful authentication does not depend only on something users know (their passwords) but also on something users have (their mobile phones), and for a successful break-in, attackers would need to steal both things from compromised users.

1. Configure VMware Verify

First you need to download and install "VMware Workspace ONE Access", which is very simple to deploy using the OVA. VMware Verify is provided as a service and thus does not require installing anything on an on-premises server. To enable VMware Verify, you must contact VMware support. They will provide you a security token, which is all you need to enable the integration with VMware Workspace ONE Access (vIDM). Once you get the token, log in to vIDM as an admin user and go to:

  1. Click on the Identity & Access Management tab
  2. Click on the Manage button
  3. Select Authentication Methods
  4. Click on the configure icon (pencil) next to VMware Verify
  5. A new window will pop up, on which you need to select the Enable VMware Verify checkbox, enter the security token provided by VMware support, and click on Save.

2. Create a Local Directory on VMware Workspace One Access

VMware Workspace ONE Access supports not only Active Directory and LDAP directories but also other types of directories, such as local directories and Just-in-Time directories. For this lab, I am going to create a local directory using the local directory feature of Workspace ONE Access. Local users are added to a local directory on the service; we need to manage the local user attribute mapping and password policies. You can create local groups to manage resource entitlements for users.

  1. Select the Directories tab
  2. Click on "Add Directory"
  3. Specify the directory and domain name (this is the same domain name I registered for VMware Verify)

3. Create/Configure a built-in Identity Provider

Once the second authentication factor is enabled as described in steps 1 and 2, it must be added as an authentication method to a Workspace ONE Access built-in identity provider. If one already exists in your environment, you can re-configure it; alternatively, you can create a new built-in identity provider as explained below. Log in to Workspace ONE Access as an admin user and then:

  1. Select the Identity & Access Management tab
  2. Click on the Manage button
  3. Click on the Identity Providers link
  4. Click on the Add Identity Provider button and select Create Built-in IDP
  5. Enter a name describing the Identity Provider (IdP)
  6. Which users can authenticate using the IdP – In the example below I am selecting the local directory that I created above.
  7. Network ranges from which users will be directed to the authentication mechanism described in the IdP
  8. The authentication methods to associate with this IdP – Here I am selecting VMware Verify as well as Local Directory.
  9. Finally click on the Add button

4. Update Access Policies on Workspace One Access

The last configuration step on Workspace One Access (vIDM) is to update the default access policy to include the second factor authentication mechanism. For that, login into Workspace One Access as an admin user and then:

  1. Select the Identity & Access Management tab
  2. Click on the Manage button
  3. Click on the Policies link
  4. Click on the Edit Default Policy button
  5. This will open up new page showing the details of the default access policy. Go to “Configuration” and Click on “ALL RANGES”.

A new window will pop-up. Modify the settings right below the line “then the user may authenticate using:”

  1. Select Password as the first authentication method – This way users will have to enter their ID and password as defined in the configured local directory
  2. Select the second authentication mechanism; here I am adding VMware Verify – After a successful password authentication, users will get a notification on their mobile phones to accept or deny the login request.
  3. I am leaving the line "If preceding Authentication Method fails or is not applicable, then:" empty – This is because I don't want to configure any fallback authentication mechanism; you can choose one based on your needs.

5. Download the app in your mobile and register a user from an Cloud Director Organization

  1. Access the app store on your mobile phone. Search for VMware Verify and download it.
  2. Once it is downloaded, open the application. It will ask for your mobile number and e-mail address; enter your details. On the screenshot below, I'm providing my mobile number and an e-mail which is only valid in my lab. After clicking OK, you will be provided two options for verifying your identity:
  1. Receiving an SMS message – the SMS will contain a registration code which you enter into the app.
  2. Receiving a phone call – after clicking on this option, the app will show a registration code you will need to type on the phone pad once you receive the call
  3. Since I am using the SMS option, it asks you to manually enter the code which you received in the SMS (XopRcVjd4u2)
  4. Once your identity has been verified, you will be asked to protect the app by setting a PIN. After that, the app will show that there are no accounts configured yet.
  5. Click on Account and add the account

Immediately after that, we will start receiving tokens on the VMware Verify mobile app, so at this point you are ready to move to the next step.

6. Enable VMware Cloud Director Federation with VMware Workspace ONE Access

There are three authentication methods that are supported by vCloud Director:

Local: local users which are created at the time of installing vCD or while creating any new organization.

LDAP service:  LDAP service enables the organisations to use their own LDAP servers for authentication. Users can then be imported into vCD from the configured LDAP.

SAML Identity Provider: A SAML Identity Provider can be used to authenticate users in organisations. SAML v2.0 metadata is required for the service to be configured. The metadata must include the location of the single sign-on service, the single logout service, and the X.509 certificate for the service. In this post we will be using federation between VMware Workspace One Access with VMware Cloud Director.

So, let’s go ahead and login to VMware Cloud Director Organization and go to “Administration” and Click on “SAML”

  1. Enable federation by setting "Entity ID" to any unique string; in this case I am setting it to the org name, which in my case is "abc"
  2. Then click on "Generate" to generate a new certificate and click "SAVE"
  3. Download the metadata from the link; it will download the file "spring_saml_metadata.xml". This activity can be performed by the system or Org administrator.
  4. In the VMware Workspace ONE Access (vIDM) admin console, go to "Catalog" and create a new web application.
    1. Write the application name and description, upload a nice icon, and choose a category.
  5. In the next screen keep the Authentication Type as SAML 2.0 and paste the XML metadata downloaded in step 3 into the URL/XML window. Scroll down to Advanced Properties.
  6. In Advanced Properties we will keep the defaults but add Custom Attribute Mappings, which describe how vIDM user attributes will translate to VCD user attributes. Here is the list:
  7. Now we can finish the wizard by clicking Next, selecting an access policy (keep the default), and reviewing the summary on the next screen.
  8. Next we need to retrieve the metadata configuration of vIDM – do this by going back to Catalog and clicking on Settings. From SAML Metadata, download the Identity Provider (IdP) metadata.
  9. Now we can finalize the SAML configuration in vCloud Director. On the Federation page, toggle the "Use SAML Identity Provider" button to enable it, import the downloaded metadata (idp.xml) with the Browse and Upload buttons, and click Apply.
  10. We first need to import some users/groups to be able to use SAML. You can import VMware Workspace ONE Access (vIDM) users by their user name or group. We can also assign a role to the imported user.

This completes the federation process between VMware Workspace ONE Access (VIDM) and VMware Cloud Director. For More details you can refer This Blog Post.

Result – Cloud Director Two Factor Authentication in Action

Let your tenant go to a browser and open their tenant URL; they will automatically get redirected to the VMware Workspace ONE Access page for authentication:

  1. The user enters their username and password; if the user is successfully authenticated, the flow moves on to 2FA
  2. In the next step, the user gets a notification on their mobile phone
  3. Once the user approves the authentication on the phone, VMware Workspace ONE Access allows access based on the role given to the user in VMware Cloud Director.

On-Board a New User

  1. Create a new user in VMware Workspace ONE Access and also grant them access to the application.
  2. The user gets an email to set up their password; the user must configure their password.
  3. The administrator logs in to Cloud Director and imports the newly created user from SAML with a Cloud Director role.
  4. The user browses the cloud URL, and after logging in to the portal with their user ID and password, they are asked to provide a mobile number for the second factor authentication.
  5. After entering the mobile number, if the user has installed the "VMware Verify" app, they get a notification to Approve/Deny; if the app has not been installed, they can click on "Sign in with SMS", receive an SMS, and enter that code for the second factor authentication.
  6. Once the user enters the passcode received on their cell phone, VMware Workspace ONE Access allows the user to log in to Cloud Director.

This completes the installation and configuration of VMware Verify with VMware Cloud Director. You can add additional things like branding of your cloud, which will give your cloud its own identity.

vCloud Director 10 – NSX-T – Tenant Configuration

In continuation of my previous post, in this post I will be covering the tenant-side configuration of vCloud Director 10 along with NSX-T.

Create OrgVDC

To provide resources to a tenant organization, you create one or more organization virtual data centers for that organization. To create an Org VDC, go to "Cloud Resources", then "Organization VDCs", and click on New:

  1. Name Tenant OrgVDC appropriately
  2. Select the Organisation
  3. Select the PVDC which is NSX-T backed.
  4. Choose appropriate allocation model (flex)
  5. Configure reservation pool related settings
  6. Choose appropriate storage policy
  7. enable “Network Pool” and select correct network pool and specify max networks
  8. Review and click on Finish.


Create Org Edges

To connect tenant networks created inside the Org VDC to the outside world, we need to create Edge Gateways, which internally and automatically create a T1 router. Here are the steps to create an Edge:

  1. Log in to the tenant by clicking on "Open in Tenant Portal", go to Edges and click New
  2. Name the tenant Edge appropriately
  3. Select the IP segment and reserve a few IPs to talk to the external world
  4. Review the configuration and submit


If you look back into NSX-T, this creates a Tier-1 router automatically and connects it to the Tier-0 router.


Org Edge supported Tenant Operation:

Currently the following T1 GW networking services are available to tenants:

  • Firewall
  • NAT
  • DHCP (without binding and relay)
  • DNS forwarding
  • IPSec VPN (API only; policy-based with pre-shared key only)


Create Org Networks

The first network to create for a tenant is an organization virtual data center network. An Org VDC network allows virtual machines in the Org VDC to communicate with each other and to access other networks, including other Org VDC networks and external networks, either directly or through an Edge Gateway that can provide firewall and NAT services as of now. There are three types of Org networks:

Isolated:

You can add an isolated orgVDC network, which is accessible only by this organization. This network provides no connectivity to virtual machines outside this organization. Virtual machines outside of this organization have no connectivity to the virtual machines in the organization.

Routed:

A routed network provides controlled access to an external network. System administrators and organization administrators can configure network address translation (NAT), firewall, and VPN settings to make specific virtual machines accessible from the external network.

Imported:

You can import an existing NSX-T overlay segment into the org; for this networking type, all networking needs to be configured and managed outside of vCloud Director.


Tenant VM External Access:

As I said, tenant networks are not advertised, so we need to create SNAT rules to allow external access:


NOTE – Tenants can only self-service isolated and routed networks; a few options like the DFW and load balancer have not yet been exposed to tenants.


Upgrade NSX-T 2.3 to NSX-T 2.4

This blog is about how I upgraded my home lab from NSX-T version 2.3 to version 2.4. First, download the NSX-T 2.4.x upgrade bundle (the MUB file) from the My VMware download portal.

Checking prerequisites

Before starting the upgrade, check the compatibility and ensure that the vCenter and ESXi versions are supported.

Upgrade NSX Manager Resources

In my lab the NSX Manager VM was running with 2 vCPUs and 8 GB of memory; with this new version the minimum requirements went up to 4 vCPUs and 16 GB of RAM. So in my case I had to shut down the NSX-T Manager and upgrade the specs. I went to 6 vCPUs and 16 GB of memory, as I was seeing 4 vCPUs getting 100% utilised.

The Upgrade Process

Upload the MUB file to the NSX Manager.

After uploading the MUB file to the NSX Manager, which takes some time, the file is validated and extracted, which again takes a little extra time. In my lab, downloading, uploading, validating and extracting the upgrade bundle took around 40-50 minutes, so have patience.

Begin the upgrade and accept the EULA.

After you click on "Begin Upgrade" and accept the EULA, it will prompt you to upgrade the "upgrade coordinator"; click on "Yes" and wait for some time. After the "upgrade coordinator" update, the session will log out; log in again and you will see the new upgrade interface. NOTE – Do not initiate multiple simultaneous upgrade processes for the upgrade coordinator.

Host Upgrade

Select the hosts and click on Start to start the hosts upgrade process.


While the upgrade is going on, continue to check your vCenter; at times an ESXi host does not go into maintenance mode due to some rule/restriction created by you, so help your ESXi server get into maintenance mode.

Continue to monitor the progress; once it completes successfully, move to "Next".

Edge Upgrade

Clicking on "NEXT" will take you to the Edge section; click on "Start" and continue to monitor the progress.

A very simple and straightforward process; once the upgrade is completed, click on "Next".

Controller Upgrade

From NSX-T 2.4 onwards, the controllers have been merged into the NSX Manager, so there is no need to upgrade the controllers; move ahead and upgrade the NSX Manager.

NSX Manager Upgrade

The upgrade of the NSX Manager gives you two options:

  • Allow Transport Node connection after a single node cluster is formed.
  • Allow Transport Node connections only after three node cluster is formed.

Choose the option which matches how your environment is configured.

Accept the upgrade notification. You can safely ignore any upgrade-related errors, such as HTTP service disruption, that appear at this time. These errors appear because the management plane is rebooting during the upgrade. Wait until the reboot finishes and the services are re-established.

You can get into the CLI and log in to the NSX Manager to verify that the services have started by running #get service. When the services start, the service state appears as "running". Some of the services include SSH, install-upgrade, and manager.

Finally, after around one hour in my lab, the Manager also got updated successfully.


This completes the upgrade process. Check the health of every NSX-T component, and if everything is green you can now go ahead and shut down and delete the NSX Controllers. I hope this helps you in your NSX-T upgrade planning.


VMware PKS, NSX-T & Kubernetes Networking & Security explained

In continuation of my last post on VMware PKS and NSX-T (Getting Started with VMware PKS & NSX-T), here is the next one, explaining the behind-the-scenes NSX-T automation for Kubernetes performed by the VMware NSX CNI plugin and PKS.

NSX-T addresses all the K8s networking needs: load balancing, IPAM, routing and firewalling. It supports complete automation and dynamic provisioning of the network objects required for K8s and its workloads, and this is what I am going to uncover in this blog post.

Among its other features, it supports different topology choices for POD and NODE networks (NAT or NO-NAT), network security policies for Kubernetes clusters, namespaces and individual services, and network traceability/visibility using NSX-T's built-in operational tools for Kubernetes.

I will be covering the deployment procedure of PKS later, but here I want to explain what happens on the NSX-T side when you run "#pks create cluster" on the PKS command line, and then when you create K8s namespaces and PODs.

pks create cluster

So when you run #pks create cluster with some arguments, it goes to vCenter and deploys Kubernetes Master and Worker VMs based on the specification you chose during deployment. On the NSX-T side a new logical switch gets created for these VMs and gets connected to them (in this example one K8s Master and 2 Nodes have been deployed). Along with the logical switch, a Tier-1 cluster router gets created which connects to your organisation's Tier-0 router.
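A typical invocation looks something like this (the cluster name, hostname and plan are placeholders; the plan names come from your PKS tile configuration):

#pks create cluster k8s-cluster01 --external-hostname k8s-cluster01.corp.local --plan small --num-nodes 2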

K8s Master and Node Logical Switches


K8s Cluster connectivity towards Tier-0


Kubernetes Namespaces and NSX-T

Now, if the K8s cluster deployed successfully, Kubernetes by default creates three namespaces:

  • Default – The default namespace for objects with no other namespace.
  • kube-system – The namespace for objects created by the Kubernetes system.
  • kube-public – The namespace is created automatically and readable by all users (including those not authenticated). This namespace is mostly reserved for cluster usage, in case that some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.

For each of these default namespaces, PKS automatically deploys and configures an NSX-T logical switch, and each logical switch has its own Tier-1 router connected to the Tier-0.

pks-infrastructure Namespace and NSX-T

In the above figure you can clearly see the "default", "kube-public" and "kube-system" logical switches. There is another logical switch, "pks-infrastructure", which is a PKS-specific namespace running PKS-related components such as the NSX-T CNI. The "pks-infrastructure" namespace runs the NSX-NCP CNI plugin to integrate NSX-T with Kubernetes.

kube-system Namespace & NSX-T

Typically, this namespace runs pods like heapster, kube-dns, kubernetes-dashboard, the monitoring DB, the telemetry agent, and things like ingresses and so on, if you deploy them.

On the NSX-T side, as explained earlier, a logical switch gets created for this namespace, and for each system POD a logical port gets created by PKS on NSX-T.

Default Namespace & NSX-T

This is the cluster's default namespace, which holds the default set of pods, services, and deployments used by the cluster. When you deploy a POD without creating/specifying a new namespace, the "default" namespace becomes the container that holds these pods, and as explained earlier it also has its own NSX-T logical switch with an uplink port to a Tier-1 router.

Now, when you deploy a Kubernetes pod without a new namespace, since that POD will be part of the "default" namespace, PKS creates an NSX-T logical port on the default logical switch. Let's create a simple POD:
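For example, a throwaway nginx pod (the pod name and image are arbitrary, not taken from the original screenshots):

#kubectl run nginx-demo --image=nginx --restart=Never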


Let's go back to the "default" namespace logical switch in NSX-T:

As you can see, a new logical port has been created on the default logical switch.

New Namespace & NSX-T

Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. In simple terms, namespaces are like Org VDCs in vCD, and the Kubernetes best practice is to arrange PODs in namespaces. So when we create a new namespace, what happens in NSX-T?

I have created a new namespace called "demo".
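The namespace itself is created with a standard Kubernetes command:

#kubectl create namespace demo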


If you observe the images below, the left image shows the default logical switches and the right image shows the logical switches after creation of the new namespace.

As you can see, a new logical switch has been created for the new namespace.

If you are creating PODs in the default namespace, all the pods get attached to the default logical switch; if you are creating a namespace (which is the K8s best practice), a new logical switch gets created, any POD deployed in that namespace gets a port on its NSX-T logical switch, and this new logical switch also gets its own Tier-1 router connecting to the Tier-0 router.

Expose PODs to Outer World

In this example we deployed a POD and got internal network connectivity, but internal-only connectivity does not give the outside world access to this web server; this is forbidden by default in Kubernetes. So we need to expose the deployment using a load balancer on a public interface on a specific port. Let's do that:
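Continuing with the hypothetical nginx pod from earlier, exposing it looks roughly like this; NSX-T then allocates a VIP for the new service on the cluster load balancer:

#kubectl expose pod nginx-demo --type=LoadBalancer --port=80

#kubectl get svc nginx-demo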


Let's browse this app using the EXTERNAL-IP; as you might know, the CLUSTER-IP is an internal IP.

Kubernetes Cluster and NSX-T Load Balancer

As above, when we exposed this deployment using a service, the work happened on the cluster load balancer that NSX-T deployed automatically when the cluster was created; on this load balancer, NSX-CNI adds the pod to the virtual servers under a new load balancer VIP.

If we drill down to the pool members of this VIP, we will see our Kubernetes pod endpoint IPs.

Behind the scenes, when you deploy a cluster, an LB logical switch and an LB Tier-1 router are created, with logical connectivity to the load balancer and the Tier-0 router, so that you can access the deployment externally.

This is what your Tier-0 will look like: it has connectivity to all the Tier-1 routers, and each Tier-1 is connected to its namespace logical switches.

All these logical switches, Tier-1 routers, the Tier-0 router, their connectivity, the LB creation, etc. are done automatically by the NSX-T Container (CNI) plugin and PKS. I was really thrilled when I tried this the first time; it is so simple once you understand the concept.

Kubernetes and Micro-segmentation

The NSX-T Container Plugin exposes container "Pods" as NSX-T logical switch ports, and because of this we can easily implement micro-segmentation rules. Once "Pods" are exposed to the NSX ecosystem, we can use the same approach we use with virtual machines for implementing micro-segmentation and other security measures.

Or you can use security groups based on tags to achieve micro-segmentation.


This is what I have tried to explain in this post: what happens behind the scenes on the NSX-T networking stack when you deploy and expose your applications on Kubernetes, and how we can bring in proven NSX-T based micro-segmentation.

Enjoy learning PKS, NSX-T and Kubernetes, one of the best combinations for Day-1 and Day-2 operation of Kubernetes 🙂 and feel free to comment with suggestions.


Upgrade NSX-T 2.1 to NSX-T 2.3

I am working on a PKS deployment and will soon share my deployment procedure for PKS, but before proceeding with the PKS deployment I need to upgrade my NSX-T lab environment to support the latest PKS, as per the compatibility matrix below.

PKS Version | Compatible NSX-T Versions | Compatible Ops Manager Versions
v1.2 | v2.2, v2.3 | v2.2.2+, v2.3.1+
v1.1.6 | v2.1, v2.2 | v2.1.x, 2.2.x
v1.1.5 | v2.1, v2.2 | v2.1.x, v2.2.x
v1.1.4 | v2.1 | v2.1.x, 2.2.x
v1.1.3 | v2.1 | v2.1.0 – 2.1.6
v1.1.2 | v2.1 | v2.1.x, 2.2.x
v1.1.1 | v2.1 – Advanced Edition | v2.1.0 – 2.1.6

In this post i will be covering the procedure to upgrade NSX-T 2.1 to NSX-T 2.3.

So, before proceeding with the upgrade, let's check the health of the current deployment. This is very important because if we start upgrading the environment and something is not working after the upgrade, we will not know whether it was working before the upgrade or not. So let's get into the validation of health and version checks.

Validate Current Version Components Health

The first thing to check is the Management Cluster and Controller connectivity; ensure they are up.

Next, validate the host deployment status and connectivity.

Check the Edge health


Let's check the Transport Node health.

Upgrade Procedure

Now Download the upgrade bundle


Go to NSX Manager and browse to Upgrade


Upload the downloaded upgrade bundle file in NSX Manager


Since the upgrade bundle is very big, it takes a lot of time to upload, extract and verify. Once the package has uploaded, click "BEGIN UPGRADE".

The upgrade coordinator will then check the installation for any potential issues. In my environment there is one warning for the Edge that connectivity is degraded – this is because I have disconnected the 4th NIC, which is safe to ignore. When you are doing this for your environment, please assess all the warnings and take the necessary actions before proceeding with the upgrade.

Clicking Next takes you to the Hosts Upgrade page. Here you can define the order and method of upgrade for each host, and define host groups to control the order of the upgrade. I've gone with the defaults: serial (one at a time) upgrades instead of parallel, because I have two hosts in each cluster.

Click START to begin the upgrade; the hosts will be put in maintenance mode, then upgraded and rebooted if necessary. Ensure DRS is enabled and that the VMs on the host being put into maintenance mode can be vMotioned off. Once a host has been upgraded and its Management Plane Agent has reported back to the Manager, the Upgrade Coordinator moves on to the next host in the group.

Once the hosts are upgraded, click Next to move to the Edge Upgrade page. Edge clusters can be upgraded in parallel if you have multiple edge clusters, but the Edges which form an Edge Cluster are upgraded serially to ensure connectivity is maintained. In my lab I have a single Edge Cluster with two Edge VMs, so it will be upgraded one Edge at a time. Click on "START" to start the Edge upgrade process.

Once the Edge Cluster has been upgraded successfully, click NEXT to move to the Controller Node Upgrade page. Here you can't change the upgrade sequence of the controllers; they are upgraded in parallel by default. (In my lab I am running a single controller because of resource constraints, but in production you will see three controllers deployed in a cluster.) Click on "START" to begin the upgrade process.

Once the controller upgrade has completed, click NEXT to move to the NSX Manager upgrade page. The NSX Manager will become unavailable for about 5 minutes after you click START, and it might take 15 to 20 minutes to upgrade the Manager.

Once the Manager upgrade has completed, review the upgrade summary.

You can re-validate the installation as we did at the start of the upgrade, checking that all the health indicators are green and that the component versions have increased.

VMware HCX on IBM Cloud

I think the goal of every enterprise organization now is to accelerate time to business value for their customers by taking advantage of the cloud. Most enterprise customer data centers are built on the VMware platform, and while considering cloud adoption, organizations face challenges such as those listed below:

  • Incompatible environments – Customers currently have various VMware vSphere versions deployed on-premises, and these customers would like to move to the latest SDDC-based cloud but face trouble due to differences in versions, networking architectures, server CPUs, etc.
  • Network complexity to the cloud – Internet/WAN technologies are complex, non-automated, and inconsistent. Setting up and maintaining VPNs, firewalls, direct connects and routing across network/co-location provider networks and enterprise networks is not easy.
  • Complex application dependencies – Enterprise applications are complex, and these applications interact with various other servers in the data center such as storage, databases, other solutions, DMZs, security solutions and platform applications. Application dependencies are hard to assess, and customers face many issues addressing them.

HCX enables enterprises to overcome the above challenges. Delivered as a VMware Cloud Service, and available through IBM Cloud, the service enables secure, seamless interoperability and hybridity between any vSphere-based clouds, enabling large-scale migration to modern clouds/data centers and application mobility with no application downtime. Built-in DR and security enable unified compliance, governance and control. Automated deployment makes it straightforward for service providers and enterprises to rapidly get to the desired state and shorten time to value.

1.png

HCX on IBM Cloud

HCX on IBM Cloud unlocks the IBM Cloud for on-premises vSphere environments, where you can build an abstraction layer between your on-premises data centers and the IBM Cloud. Once done, networks can be stretched securely across the HCX hybrid interconnect, which enables seamless mobility of virtual machines. HCX enables hybrid capabilities in vCenter so that workloads can be protected and migrated to and from the IBM Cloud.

HCX uses vSphere Replication to provide robust replication capabilities at the hypervisor layer, which, coupled with HCX's ability to stretch layer 2 networks, means no intrusive change at the OS or application layer is required. This allows VMs to be easily migrated to and from the IBM Cloud without any change. HCX also optimizes migration traffic with built-in WAN optimization.

The HCX solution requires at least one VCF V2.1 or vCenter Server instance running vSphere 6.5 with the HCX on IBM Cloud service installed. This solution also requires the HCX software to be installed on-premises for the vSphere environment, in which case the HCX on IBM Cloud instance must be ordered for licensing and activation of the on-premises HCX installation.

HCX Components

HCX comprises a cloud-side and a client-side install at a minimum. An instance of HCX must be deployed per vCenter, regardless of whether the vCenters where HCX is deployed are linked in the same SSO domain on the client or cloud side. Site configurations supported by HCX client-to-cloud are one-to-one, one-to-many, many-to-one and many-to-many. HCX has the concept of a cloud-side install and a customer-side install.

Cloud side = destination (VCF or vCenter as a Service on IBM Cloud). The cloud-side install of HCX is the "slave" instance of an HCX client-to-cloud relationship. It is controlled by the customer-side install.

Customer side = any vSphere instance (source). The client side of the HCX install is the master, which controls the cloud-side slave instance via its vCenter web client UI.

HCX Manager

The cloud-side HCX Manager is the first part of an HCX install; it is deployed on the cloud side automatically by IBM Cloud for VMware Solutions. Initially it is a single OVA image file specific to the cloud side, deployed in conjunction with an NSX Edge load balancer/firewall which is configured specifically for the HCX role. The HCX Manager is configured to listen for incoming client-side registration, management and control traffic via the configured NSX Edge load balancer/firewall.

The client-side HCX Manager is a client-side-specific OVA image file which provides the UI functionality for managing and operating HCX. The client-side HCX Manager is responsible for registering with the cloud-side HCX Manager and creating a management plane between the client and cloud side. It is also responsible for deploying fleet components on the client side and instructing the cloud side to do the same.

HCX Interconnect Service

The interconnect service provides resilient access over the internet and private lines to the target site while providing strong encryption, traffic engineering and extending the data center. This service simplifies secure pairing of site and management of HCX components.

WAN Optimization – Improves performance characteristics of the private lines or internet paths by leveraging WAN optimization techniques like data de-duplication and line conditioning. This makes performance closer to a LAN environment.

Network Extension Service

High throughput Network Extension service with integrated Proximity Routing which unlocks seamless mobility and simple disaster recovery plans across sites.

The other HCX components are responsible for creating the data and control planes between the client and cloud side. Deployed as VMs in mirrored pairs, they consist of the following:

Cloud Gateway: The Cloud Gateway is an optional component which is responsible for creating encrypted tunnels for vMotion and replication traffic.

Layer 2 Concentrator: The Layer 2 Concentrator is an optional component responsible for creating encrypted tunnels for the data and control plane corresponding to stretched layer 2 traffic. Each L2C pair can handle up to 4096 stretched networks. Additional L2C pairs can be deployed as needed.

WAN Optimizer: HCX includes an optionally deployed Silver Peak WAN optimization appliance, delivered as a VM appliance. When deployed, the CGW tunnel traffic is redirected to traverse the WAN Optimizer pair.

Proxy ESXi host: Whenever the CGW is configured to connect to the cloud-side HCX site, a proxy ESXi host appears in vCenter outside of any cluster. This ESXi host has the same management and vMotion IP address as the corresponding CGW appliance.

HCX Licenses:

  1. Traffic on ports 80 and 443 must be allowed to https://connect.hcx.vmware.com (a quick connectivity check is shown after this list).
  2. A one-time-use registration key for the client-side install is provided via the IBM Cloud for VMware Solutions portal. A key is required for each client-side HCX installation.
  3. The Cloud side HCX registration is automatically completed by the IBM Cloud HCX deployment automation.
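
As a quick sanity check before registering the client side, you can verify outbound reachability to the activation endpoint from a machine on the same network as the client-side HCX Manager. This is only an illustrative test using curl, not part of the HCX installer; any equivalent connectivity check will do:

Check port 443: curl -v https://connect.hcx.vmware.com
Check port 80: curl -v http://connect.hcx.vmware.com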

HCX Use Cases:

  1. Migrate applications to the IBM Cloud seamlessly, securely, and efficiently.
  2. Minimal need for long migration plans and application dependency mapping.
  3. Secure vMotion and bulk migration while keeping the same IPs/networks.
  4. Securely extend the data center to the IBM Cloud.
  5. Extend networks, IPs, security policies, and IT management to the IBM Cloud.
  6. Secure protection – BC/DR via HCX.


Install and Configure UMDS 6.5

VMware vSphere Update Manager Download Service (UMDS) is an optional module of Update Manager. For security reasons and deployment restrictions, vSphere, including Update Manager, might be installed in a secured network that is disconnected from other local networks and the Internet. Update Manager requires access to patch information to function properly. If you are using such an environment, you can install UMDS on a computer that has Internet access to download upgrades, patch binaries, and patch metadata, and then export the downloads to a portable media drive or configure an IIS server  so that they become accessible to the Update Manager server.

Pre-requisite

  • Verify that the machine on which you install UMDS has Internet access, so that UMDS can download upgrades, patch metadata and patch binaries.

  • Uninstall older versions of UMDS 1.0.x, UMDS 4.x, or UMDS 5.x if they are installed on the machine.

  • Create a database instance and configure it before you install UMDS. When you install UMDS on a 64-bit machine, you must configure a 64-bit DSN and test it from ODBC. The database privileges and preparation steps are the same as the ones used for Update Manager.

  • UMDS and Update Manager must be installed on different machines.

Installation and Configuration

Mount the vCenter ISO, open it, double-click the autorun.exe file, and select vSphere Update Manager > Download Service.

  • (Optional) Select the option to use Microsoft SQL Server 2012 Express as the embedded database and click Install (if you have not configured a database as a prerequisite).

Specify the Update Manager Download Service proxy settings (if using a proxy) and click Next.
Select the Update Manager Download Service installation and patch download directories and click Next. The patch download directory should have around 150 GB of free space for successful patch downloads.

 In the warning message about the disk free space, click OK.

Click Install to begin the installation.

Click Finish to complete the installation.

Configuring UMDS

UMDS does not have a GUI; all configuration is done via the command line. To begin configuration, open a command prompt and browse to the directory where UMDS is installed:

In my case it is installed at C:\Program Files\VMware\Infrastructure\Update Manager.


Run the command vmware-umds -G to view the patch store location, the proxy settings, and which downloads are enabled. By default it is configured to download host patches for ESX(i) 4, 5, and 6; I have disabled downloads for versions 4 and 5.

Downloads for a specific version can be disabled by running:

#vmware-umds -S -d embeddedEsx-5.0.0-INTL

Important commands:

To enable or disable host and virtual appliance patch downloads, you can run:

vmware-umds -S --enable-host
vmware-umds -S --disable-host
vmware-umds -S --enable-host --enable-va
vmware-umds -S --disable-host --disable-va

You can change the patch store location using:

vmware-umds -S --patch-store c:\Patches

New URLs can be added, or existing ones removed by using:

vmware-umds -S --add-url
vmware-umds -S --remove-url

To start downloading patches, you can use:

vmware-umds -D

If needed, you can also re-download the patches for a specified time window:

vmware-umds -R --start-time 2010-11-01T00:00:00 --end-time 2010-11-30T23:59:59
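
Putting the commands above together, a typical first-time configuration sequence might look like the following sketch. The patch store path and the version identifier are examples only; run vmware-umds -G first and use the identifiers it reports for your environment:

View the current configuration (patch store, proxy, enabled downloads):

vmware-umds -G

Disable downloads for versions you no longer need (example identifier):

vmware-umds -S -d embeddedEsx-5.0.0-INTL

Point the patch store at a dedicated directory (example path):

vmware-umds -S --patch-store c:\Patches

Download the patches:

vmware-umds -D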

Once you have your patches downloaded, the next step is to make them available to Update Manager. There are a couple of ways you can do this, depending on your environment. If you wish to export all the downloaded patches to an external drive for transfer to the Update Manager server, you can do so by running, for example:

vmware-umds -E --export-store e:\patchs

You can then zip the export for transfer, or alternatively configure an IIS web server and publish the patch store location.

For IIS, select Add Roles and Features in Windows 2012 Server Manager and select the Web Server (IIS) checkbox.

After the IIS installation is completed, start Internet Information Services (IIS) Manager, which is located in Administrative Tools. Right-click Default Web Site and select Add Virtual Directory, choose an alias for the virtual directory, and select the patch store location as the physical path.


In MIME Types, you need to add .vib and .sig as the application/octet-stream type. Finally, you need to enable Directory Browsing on the virtual directory.
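
If you prefer to script this part of the IIS configuration, the same MIME type and directory browsing settings can be applied with appcmd from an elevated command prompt. This is only a minimal sketch: the virtual directory alias umds is an assumed example, and you should verify the syntax against your IIS version.

%windir%\system32\inetsrv\appcmd set config "Default Web Site/umds" /section:staticContent /+"[fileExtension='.vib',mimeType='application/octet-stream']"
%windir%\system32\inetsrv\appcmd set config "Default Web Site/umds" /section:staticContent /+"[fileExtension='.sig',mimeType='application/octet-stream']"
%windir%\system32\inetsrv\appcmd set config "Default Web Site/umds" /section:directoryBrowse /enabled:true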


To start using this shared repository with Update Manager, log in to the vSphere Web Client, go to Update Manager, then to Admin View > Settings > Download Settings, and click Edit. Select Use a shared repository and enter the URL (http://<IP address or FQDN>/<VIRTUAL_DIRECTORY>).
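
For example, assuming the IIS server is reachable at 192.168.1.50 and the virtual directory alias is umds (both values are illustrative), the shared repository URL would be http://192.168.1.50/umds/.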


After you click OK, Update Manager validates the repository and downloads the metadata of the downloaded patches. You can then follow your existing process: create baselines, attach the baselines, and patch your hosts.


Deploy VMware Cloud Foundation on IBM Cloud – Part 1

IBM Cloud for VMware Solutions enables you to quickly and seamlessly integrate or migrate your on-premises VMware workloads to the IBM Cloud by using the scalable, secure, and high-performance IBM Cloud infrastructure and the industry-leading VMware hybrid virtualization technology.

IBM Cloud for VMware Solutions allows you to easily deploy your VMware virtual environments and manage the infrastructure resources on IBM Cloud. At the same time, you can still use your familiar native VMware product console to manage the VMware workloads.

Deployment Options:

IBM Cloud for VMware Solutions provides standardized and customizable deployment choices of VMware virtual environments. The following deployment types are offered:

  • VMware Cloud Foundation on IBM Cloud: The Cloud Foundation offering provides a unified VMware virtual environment by using standard IBM Cloud compute, storage, and network resources that are dedicated to each user deployment.
  • VMware vCenter Server on IBM Cloud: The vCenter Server offering allows you to deploy a VMware virtual environment by using custom compute, storage, and network resources to best fit your business needs.
  • VMware vSphere on IBM Cloud: The vSphere on IBM Cloud offering provides a customizable virtualization service that combines VMware-compatible bare metal servers, hardware components, and licenses, to build your own IBM-hosted VMware environment.

To use IBM Cloud for VMware Solutions to order instances, you must have an IBM Cloud account. The cost of the components that are ordered in your instances is billed to that IBM Cloud account.

When you order VMware Cloud Foundation on IBM Cloud, an entire VMware environment is deployed automatically. The base deployment consists of four IBM Cloud Bare Metal Servers with the VMware Cloud Foundation stack pre-installed and configured to provide a unified software-defined data center (SDDC) platform. Cloud Foundation natively integrates VMware vSphere, VMware NSX, and VMware vSAN, and the deployment is architected based on VMware Validated Designs.

The overall architecture of a Cloud Foundation deployment consists of the following layers and components:

Physical infrastructure

Physical Infrastructure provides the physical compute, storage, and network resources to be used by the virtual infrastructure.

Virtualization infrastructure (Compute, Storage, and Network)

Virtualization infrastructure layer virtualizes the physical infrastructure through different VMware products:

  • VMware vSphere virtualizes the physical compute resources.
  • vSAN provides software-defined shared storage based on the storage in the physical servers.
  • VMware NSX is the network virtualization platform that provides logical networking components and virtual networks.

Virtualization Management

Virtualization Management consists of vCenter Server, which represents the management layer for the virtualized environment. The same familiar vSphere API-compatible tools and scripts can be used to manage the IBM hosted VMware environment.

On the IBM Cloud for VMware Solutions console, you can expand and contract the capacity of your instances using the add and remove ESXi server capability. In addition, lifecycle management functions like applying updates and upgrading the VMware components in the hosted environment are also available.

Cloud Foundation (VCF) Technical specifications

Hardware (options)

  • Small (Dual Intel Xeon E5-2650 v4 / 24 cores total, 2.20 GHz / 128 GB RAM)
  • Large (Dual Intel Xeon E5-2690 v4 / 28 cores total, 2.60 GHz / 512 GB RAM)
  • User customized (CPU model and RAM, up to 1.5 TB of memory)

Networking

  • 10 Gbps dual public and private network uplinks
  • Three VLANs: one public and two private
  • Secure VMware NSX Edge Services Gateway

VSIs (Virtual Server Instances)

  • One VSI for Windows AD and DNS services

Storage:

  • Small option: Two 1.9 TB SSD capacity disks
  • Large option: Four 3.8 TB SSD capacity disks
  • User customized option
    • Storage disk: 1.9 or 3.8 TB SSD
    • Disk quantity: 2, 4, 6, or 8
  • Included in all storage options
    • Two 1 TB SATA boot disks
    • Two 960 GB SSD cache disks
    • One RAID disk controller
  • One 2 TB shared block storage for backups that can be scaled up to 12 TB (you can choose whether you want the storage by selecting or deselecting the Veeam on IBM Cloud service)

IBM-provided VMware licenses or bring your own licenses (BYOL)

  • VMware vSphere Enterprise Plus 6.5u1
  • VMware vCenter Server 6.5
  • VMware NSX Enterprise 6.3
  • VMware vSAN (Advanced or Enterprise) 6.6
  • SDDC Manager licenses
  • Support and Services fee (one license per node)

As you may be aware, VCF has strict requirements on the physical infrastructure. For that reason, you can deploy instances only in IBM Cloud data centers that meet those requirements. VCF can be deployed in data centers in the following cities – Amsterdam, Chennai, Dallas, Frankfurt, Hong Kong, London, Melbourne, Queretaro, Milan, Montreal, Oslo, Paris, Sao Paulo, Seoul, San Jose, Singapore, Sydney, Tokyo, Toronto, and Washington.

When you order a VCF instance, you can also order additional services on top of VCF:

Veeam on IBM Cloud

This service helps you manage the backup and restore of all the virtual machines (VMs) in your environment, including the backup and restore of the management components.

F5 on IBM Cloud

This service optimizes performance and ensures availability and security for applications with the F5 BIG-IP Virtual Edition (VE).

FortiGate Security Appliance on IBM Cloud

This service deploys an HA-pair of FortiGate Security Appliance (FSA) 300 series devices that can provide firewall, routing, NAT, and VPN services to protect the public network connection to your environment.

Managed Services

These services enable IBM Integrated Managed Infrastructure (IMI) to deliver dynamic remote management services for a broad range of cloud infrastructures.

Zerto on IBM Cloud

This service provides replication and disaster recovery capabilities to help protect your workloads.

This post covers some basic information about the VMware and VCF options available on IBM Cloud. In the next post, I will deploy a fully automated VCF instance. Till then, Happy Learning 🙂

NSX-T 2.0 – Host Preparation

A fabric node is a node that has been registered with the NSX-T management plane and has NSX-T modules installed. For a hypervisor host to be part of the NSX-T overlay, it must first be added to the NSX-T fabric.

As we know, NSX-T is vCenter agnostic; the Host Switch is configured from the NSX Manager UI. The NSX Manager owns the life cycle of the Host Switch and the creation of Logical Switches on these Host Switches.

  • Go to Fabric > Nodes, select the Hosts tab, and click ADD.


  • Enter the name of the host and its IP address, and choose the OS type. Since in this exercise I am adding an ESXi host, I chose "ESXi". Enter the "root" credentials and, most importantly, the host thumbprint. To get the thumbprint, run the following command at the ESXi command prompt – # openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout


  • Click Save.


  • If you do not enter the host thumbprint, the NSX-T UI prompts you to use the default thumbprint in the plain text format retrieved from the host.


Monitor the progress; the NSX binaries (VIBs) will be installed on the host.

  • Since I am deploying in my home lab, my ESXi host had only 12 GB of RAM and the installation failed because the minimum RAM requirement is 16 GB.


  • And finally, the VIBs installed successfully.


Now let's add the host into the management plane.

Joining the hypervisor hosts with the management plane ensures that the NSX Manager and the hosts can communicate with each other.

  • Open an SSH session to the NSX Manager appliance and log in with the administrator credentials. On the NSX Manager appliance, run the get certificate api thumbprint command. The command output is a string that is unique to this NSX Manager. Copy this string.


  • Now open an SSH session to the hypervisor host and run the join management-plane command.


  • Provide the following information (a combined example follows this list):
    •  Hostname or IP address of the NSX Manager with an optional port number
    • Username of the NSX Manager
    • Certificate thumbprint of the NSX Manager
    • Password of the NSX Manager
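
Putting both steps together, the flow looks roughly like the sketch below. The IP address and thumbprint are placeholders, and the exact argument order can vary between NSX-T versions, so verify it against the CLI guide for your release.

On the NSX Manager:

get certificate api thumbprint

On the ESXi host (typically from the nsxcli shell):

join management-plane 192.168.110.201 username admin thumbprint <manager-api-thumbprint>
Password for API user: <NSX-Manager-password>
Node successfully joined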


  • The command prompts for the password for the API user (the NSX Manager password), and if everything is fine you should see "Node successfully joined".


In the next post I will prepare a KVM host. Till then, Happy Learning 🙂