Upgrade NSX-T 2.3 to NSX-T 2.4

This blog is about how I upgraded my homelab from NSX-T version 2.3 to version 2.4. First, I downloaded the NSX-T 2.4.x upgrade bundle (the MUB file) from the My VMware download portal.

Checking prerequisites

Before starting the upgrade, check the compatibility matrix and ensure that your vCenter and ESXi versions are supported.

Upgrade NSX Manager Resources

In my lab the NSX Manager VM was running with 2 vCPUs and 8 GB of memory. With this new version, the minimum requirements went up to 4 vCPUs and 16 GB of RAM. So I had to shut down the NSX-T Manager and upgrade its specs; I went to 6 vCPUs and 16 GB of memory, as I was seeing the 4 vCPUs getting 100% utilised.

The Upgrade Process

Upload the MUB file to the NSX Manager.

Uploading the MUB file to the NSX Manager takes some time; the file is then validated and extracted, which takes a little extra time. In my lab, downloading, uploading, validating and extracting the upgrade bundle took around 40–50 minutes, so have patience.

Begin the upgrade and accept the EULA.

After you click “Begin Upgrade”, accept the EULA. You will then see a message asking to upgrade the “upgrade coordinator”; click “Yes” and wait for some time. Once the upgrade coordinator is updated, the session will log out. Log in again, and you will see a new upgrade interface. NOTE: Do not initiate multiple simultaneous upgrade processes for the upgrade coordinator.


Host Upgrade

Select the hosts and click Start to begin the host upgrade process.


While the upgrade is running, keep checking your vCenter. At times an ESXi host does not go into maintenance mode due to some rule or restriction you created, so help your ESXi server into maintenance mode.
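If a host gets stuck, one way to help it along is from the host's own SSH shell (a sketch; confirm first that DRS can evacuate the running VMs):

```
# Put the host into maintenance mode (powered-on VMs must be moved off)
esxcli system maintenanceMode set --enable true

# Verify the current state
esxcli system maintenanceMode get
```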



Continue to monitor the progress; once it completes successfully, move to “Next”.

Edge Upgrade

Clicking “NEXT” will take you to the Edge section; click “Start” and continue to monitor the progress.


A very simple and straightforward process; once the upgrade completes, click “Next”.


Controller Upgrade

From NSX-T 2.4 onwards, the controllers have been merged into the NSX Manager, so there is no need to upgrade the controllers; move ahead and upgrade the NSX Manager.


NSX Manager Upgrade

The NSX Manager upgrade gives you two options:

  • Allow Transport Node connections after a single-node cluster is formed.
  • Allow Transport Node connections only after a three-node cluster is formed.

Choose the option that matches your environment's configuration.


Accept the upgrade notification. You can safely ignore any upgrade-related errors, such as an HTTP service disruption, that appear at this time. These errors appear because the management plane reboots during the upgrade. Wait until the reboot finishes and the services are re-established.

You can also verify from the CLI: log in to the NSX Manager and run the get service command to check that the services have started. When the services start, the service state appears as running. Some of the services include SSH, install-upgrade, and manager.
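A quick check from the NSX Manager CLI might look like this (output abbreviated and approximate; exact fields vary by version):

```
nsx-manager> get service
Service name: install-upgrade
Service state: running
Service name: manager
Service state: running
Service name: ssh
Service state: running
```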

Finally, after around one hour in my lab, the Manager was also updated successfully.


This completes the upgrade process. Check the health of every NSX-T component, and if everything is green, you can go ahead and shut down and delete the NSX Controllers. Hope this helps you in your NSX-T upgrade planning.






VMware PKS, NSX-T & Kubernetes Networking & Security explained

In continuation of my last post on VMware PKS and NSX-T (Getting Started with VMware PKS & NSX-T), here is the next one, explaining the behind-the-scenes NSX-T automation for Kubernetes done by the VMware NSX CNI plugin and PKS.

NSX-T addresses all the K8s networking needs: load balancing, IPAM, routing and firewalling. It supports complete automation and dynamic provisioning of the network objects required for K8s and its workloads, and this is what I am going to uncover in this blog post.

Other features include support for different topology choices for POD and node networks (NAT or no-NAT), network security policies for Kubernetes clusters, namespaces and individual services, and network traceability/visibility using NSX-T's built-in operational tools for Kubernetes.

I will cover the deployment procedure of PKS later; here I just want to explain what happens on the NSX-T side when you run “pks create cluster” on the PKS command line, and then when you create K8s namespaces and PODs.

pks create cluster

So when you run pks create cluster with some arguments, PKS goes to vCenter and deploys Kubernetes master and worker VMs based on the specification you chose during deployment. On the NSX-T side, a new logical switch gets created and connected to these VMs (in this example, one K8s master and two nodes have been deployed). Along with the logical switch, a Tier-1 cluster router gets created and connected to your organisation's Tier-0 router.
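The actual CLI verb is create-cluster; an invocation looks roughly like this (cluster name, hostname and plan are illustrative):

```
pks create-cluster k8s-cluster-01 \
    --external-hostname k8s-cluster-01.corp.local \
    --plan small

# Watch provisioning status; the NSX-T objects appear as the
# master/worker VMs deploy
pks cluster k8s-cluster-01
```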

K8s Master and Node Logical Switches


K8s Cluster connectivity towards Tier-0


Kubernetes Namespaces and NSX-T

Once the K8s cluster is deployed successfully, Kubernetes by default creates three namespaces:

  • Default – The default namespace for objects with no other namespace.
  • kube-system – The namespace for objects created by the Kubernetes system.
  • kube-public – The namespace is created automatically and readable by all users (including those not authenticated). This namespace is mostly reserved for cluster usage, in case that some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.

For each of these namespaces, PKS automatically deploys and configures an NSX-T logical switch, and each logical switch has its own Tier-1 router connected to the Tier-0 router.


pks-infrastructure Namespace and NSX-T

In the above figure you can clearly see the “default”, “kube-public” and “kube-system” logical switches. There is another logical switch, “pks-infrastructure”, created for the PKS-specific namespace that runs PKS-related components such as the NSX-T CNI. The “pks-infrastructure” namespace runs the NSX-NCP CNI plugin to integrate NSX-T with Kubernetes.


kube-system Namespace & NSX-T

Typically, this namespace runs pods like heapster, kube-dns, kubernetes-dashboard, the monitoring DB, the telemetry agent, and things like ingress controllers, if you deploy them.


On the NSX-T side, as explained earlier, a logical switch gets created for this namespace, and for each system POD a logical port gets created by PKS on NSX-T.


Default Namespace & NSX-T

This is the cluster's default namespace, which holds the default set of pods, services, and deployments used by the cluster. When you deploy a POD without creating or specifying a new namespace, the “default” namespace becomes the container that holds those pods, and as explained earlier it also has its own NSX-T logical switch with an uplink port to a Tier-1 router.


So when you deploy a Kubernetes pod without a new namespace, since that POD will be part of the “default” namespace, PKS creates an NSX-T logical port on the default logical switch. Let's create a simple POD:
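For example, a minimal pod manifest like the one below lands in the default namespace when applied, and PKS/NCP wires up a logical port for it (the image and names are illustrative, not from the post):

```shell
# Write a minimal pod spec; with no namespace field it goes to "default"
cat > web-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx:1.14
    ports:
    - containerPort: 80
EOF
echo "wrote web-pod.yaml"
# Apply it with: kubectl apply -f web-pod.yaml
```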


Let's go back to the “default” namespace's logical switch in NSX-T:


As you can see, a new logical port has been created on the default logical switch.

New Namespace & NSX-T

Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. In simple terms, namespaces are like org VDCs in vCD, and the Kubernetes best practice is to arrange PODs in namespaces. So when we create a new namespace, what happens in NSX-T?

I have created a new namespace called “demo”.
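Creating the namespace can be done imperatively or declaratively; a declarative sketch (the name “demo” matches the post):

```shell
# Namespace manifest; applying it triggers NCP to build the
# per-namespace logical switch and Tier-1 router described above
cat > demo-namespace.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: demo
EOF
echo "wrote demo-namespace.yaml"
# Apply with: kubectl apply -f demo-namespace.yaml
# (or simply: kubectl create namespace demo)
```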


If you observe the images below, the left image shows the default switches and the right image shows the logical switches after creation of the new namespace.

As you can see, a new logical switch has been created for the new namespace.

If you create PODs in the default namespace, all the pods get attached to the default logical switch. If you create a namespace (which is the K8s best practice), a new logical switch gets created, and any POD deployed in that namespace becomes part of its NSX-T logical switch; this new logical switch also gets its own Tier-1 router connecting to the Tier-0 router.


Expose PODs to Outer World

In this example we deployed a POD and got internal network connectivity, but internal-only connectivity does not give the outside world access to this web server, and external access is forbidden by default in Kubernetes. So we need to expose this deployment on a specific port of a public interface using a load balancer. Let's do that:
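One way to do this is a Service of type LoadBalancer; a sketch (the selector and ports assume the illustrative nginx pod above, not anything from the post's screenshots):

```shell
# LoadBalancer service; NSX-T's NCP allocates a VIP on the cluster
# load balancer and adds the pod endpoints to the server pool
cat > web-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
EOF
echo "wrote web-svc.yaml"
# Apply with: kubectl apply -f web-svc.yaml
# Then: kubectl get svc web   (EXTERNAL-IP is the NSX-T VIP)
```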



Let's browse this app using the EXTERNAL-IP; as you might know, the CLUSTER-IP is an internal IP.


Kubernetes Cluster and NSX-T Load Balancer

As shown above, when we expose this deployment as a service: a cluster load balancer was deployed automatically by NSX-T when the cluster was created, and on this load balancer the NSX CNI goes ahead and adds the pod to the virtual server pool under a new load balancer VIP.


If we drill down to the pool members of this VIP, we will see our Kubernetes pod endpoint IPs.


Behind the scenes, when you deploy a cluster, an LB logical switch and an LB Tier-1 router are created with logical connectivity to the load balancer and the Tier-0 router, so that you can access the deployment externally.


This is what your Tier-0 router will look like: it has connectivity to all the Tier-1 routers, and each Tier-1 router connects to its namespace logical switches.


All of these logical switch, Tier-1 router and Tier-0 router creations, their connectivity, the LB creation and so on are done automatically by the NSX-T container (CNI) plugin and PKS. I was really thrilled when I tried this the first time; it is so simple once you understand the concept.

Kubernetes and Micro-segmentation

The NSX-T container plugin exposes container pods as NSX-T logical switches/ports, and because of this we can easily implement micro-segmentation rules. Once pods are exposed to the NSX ecosystem, we can use the same approach we use with virtual machines for implementing micro-segmentation and other security measures.


Or you can use tag-based security groups to achieve micro-segmentation.



In this post I have tried to explain what happens behind the scenes in the NSX-T networking stack when you deploy and expose your applications on Kubernetes, and how we can bring in proven NSX-T-based micro-segmentation.

Enjoy learning PKS, NSX-T and Kubernetes, one of the best combinations for Day-1 and Day-2 operation of Kubernetes 🙂 and feel free to comment with suggestions.




Upgrade NSX-T 2.1 to NSX-T 2.3

I am working on a PKS deployment and will soon be sharing my deployment procedure, but before proceeding with PKS, I need to upgrade my NSX-T lab environment to support the latest PKS, as per the compatibility matrix below.

PKS Version   Compatible NSX-T Versions   Compatible Ops Manager Versions
v1.2          v2.2, v2.3                  v2.2.2+, v2.3.1+
v1.1.6        v2.1, v2.2                  v2.1.x, v2.2.x
v1.1.5        v2.1, v2.2                  v2.1.x, v2.2.x
v1.1.4        v2.1                        v2.1.x, v2.2.x
v1.1.3        v2.1                        v2.1.0 – 2.1.6
v1.1.2        v2.1                        v2.1.x, v2.2.x
v1.1.1        v2.1 – Advanced Edition     v2.1.0 – 2.1.6

In this post I will cover the procedure to upgrade NSX-T 2.1 to NSX-T 2.3.

Before proceeding with the upgrade, let's check the health of the current deployment. This is very important: if something is not working after the upgrade, we would otherwise not know whether it was working before the upgrade. So let's get into the health validation and version checks.

Validate Current Version Components Health

First, check the management cluster and controller connectivity and ensure they are up.

Next, validate the host deployment status and connectivity.


Check the Edge health


Let's check the transport node health.


Upgrade Procedure

Now Download the upgrade bundle


Go to NSX Manager and browse to Upgrade


Upload the downloaded upgrade bundle file in NSX Manager


Since the upgrade bundle is very big, the upload, extraction and verification take a long time. Once the package has uploaded, click “BEGIN UPGRADE”.


The upgrade coordinator will then check the install for any potential issues. In my environment there is one warning for the Edge, saying connectivity is degraded; this is because I had disconnected the 4th NIC, and it is safe to ignore. When you are doing this for your environment, please assess all the warnings and take the necessary actions before proceeding with the upgrade.


Clicking Next takes you to the Hosts Upgrade page. Here you can define the order and method of upgrade for each host, and define host groups to control the order of upgrade. I've gone with the defaults: serial (one at a time) upgrades rather than parallel, because I have two hosts in each cluster.

Click START to begin the upgrade; the hosts will be put into maintenance mode, then upgraded and rebooted if necessary. Ensure DRS is enabled, and the VMs on the hosts must be able to vMotion off the host being put into maintenance mode. Once a host has upgraded and its Management Plane Agent has reported back to the Manager, the Upgrade Coordinator moves on to the next host in the group.


Once the hosts are upgraded, click Next to move to the Edge Upgrade page. Edge clusters can be upgraded in parallel if you have multiple edge clusters, but the Edges that form an Edge cluster are upgraded serially to ensure connectivity is maintained. In my lab I have a single Edge cluster with two Edge VMs, so it will be upgraded one Edge at a time. Click “START” to start the edge upgrade process.



Once the Edge cluster has been upgraded successfully, click NEXT to move to the Controller Node Upgrade page. Here you can't change the upgrade sequence of the controllers; controllers are upgraded in parallel by default. (In my lab I am running a single controller because of resource constraints, but in production you will see three controllers deployed in a cluster.) Click “START” to begin the upgrade process.


Once the controller upgrade has completed, click NEXT to move to the NSX Manager upgrade page. The NSX Manager will become unavailable for about 5 minutes after you click START, and it might take 15 to 20 minutes to upgrade the Manager.


Once the Manager upgrade has completed, review the upgrade cycle.


You can then re-validate the installation as we did at the start of the upgrade, checking that all the lights are green and that the component versions have increased.

VMware HCX on IBM Cloud

The goal of every enterprise organization now is to accelerate time to business value for their customers by taking advantage of the cloud. Most enterprise customer data centers are built on the VMware platform, and while considering cloud adoption, organizations face the challenges listed below:

  • Incompatible environments – Customers currently have various VMware vSphere versions deployed on-premises, and when they want to move to the latest build of an SDDC-based cloud, they run into trouble due to differences in versions, networking architectures, server CPUs, etc.
  • Network complexity to the cloud – Internet/WAN technologies are complex, non-automated, and inconsistent. Setting up and maintaining VPNs, firewalls, direct connects, and routing across network/co-location provider networks and enterprise networks is not easy.
  • Complex application dependencies – Enterprise applications are complex, interacting with various other servers in the data center such as storage, databases, other solutions, the DMZ, security solutions, and platform applications. Application dependencies are hard to assess, and customers face a lot of issues addressing them.

HCX enables enterprises to overcome the above challenges. Delivered as a VMware Cloud Service, and available through IBM Cloud, the service enables secure, seamless interoperability and hybridity between any VMware vSphere based clouds, enabling large-scale migration to modern clouds/datacenters, and application mobility, with no application downtime. Built-in DR and security enable unified compliance, governance and control. Automated deployment makes it straightforward for service providers and enterprises to rapidly get to the desired state and shorten time to value.


HCX on IBM Cloud

HCX on IBM Cloud unlocks the IBM Cloud for on-premises vSphere environments, where you can build an abstraction layer between your on-premises data centers and the IBM Cloud. Once done, networks can be stretched securely across the HCX hybrid interconnect, which enables seamless mobility of virtual machines. HCX enables hybrid capabilities in vCenter so that workloads can be protected and migrated to and from the IBM Cloud.

HCX uses vSphere Replication to provide robust replication capabilities at the hypervisor layer. Coupled with HCX's ability to stretch layer 2 networks, no intrusive change at the OS or application layer is required. This allows VMs to be easily migrated to and from the IBM Cloud without any changes. HCX also optimizes migration traffic with built-in WAN optimization.

The HCX solution requires at least one VCF V2.1 or vCenter Server instance running vSphere 6.5 with the HCX on IBM Cloud service installed. This solution also requires the HCX software to be installed on-premises for the vSphere environment, in which case the HCX on IBM Cloud instance must be ordered for licensing and activation of the on-premises HCX installation.

HCX Components

HCX comprises a cloud-side and a client-side install at a minimum. An instance of HCX must be deployed per vCenter, regardless of whether the vCenters where HCX is to be deployed are linked in the same SSO domain on the client or cloud side. The site configurations supported by HCX client-to-cloud are one-to-one, one-to-many, many-to-one and many-to-many. HCX has the concept of a cloud-side install and a customer-side install.

Cloud side = destination (VCF or vCenter as a Service on IBM Cloud). The cloud-side install of HCX is the “slave” instance of an HCX client-to-cloud relationship. It is controlled by the customer-side install.

Customer side = any vSphere instance (source). The client side of the HCX install is the master, which controls the cloud-side slave instance via its vCenter Web Client UI.

HCX Manager

The cloud-side HCX Manager is the first part of an HCX install process; it is deployed on the cloud side automatically by IBM Cloud for VMware Solutions. Initially it is a single deployed OVA image specific to the cloud side, in conjunction with an NSX Edge load balancer/firewall configured specifically for the HCX role. The HCX Manager is configured to listen for incoming client-side registration, management and control traffic via the configured NSX Edge load balancer/firewall.

The client-side HCX Manager is a client-side-specific OVA image that provides the UI functionality for managing and operating HCX. The client-side HCX Manager is responsible for registering with the cloud-side HCX Manager and creating a management plane between the client and cloud sides. It is also responsible for deploying fleet components on the client side and instructing the cloud side to do the same.

HCX Interconnect Service

The interconnect service provides resilient access over the internet and private lines to the target site while providing strong encryption, traffic engineering and extending the data center. This service simplifies secure pairing of site and management of HCX components.

WAN Optimization – Improves performance characteristics of the private lines or internet paths by leveraging WAN optimization techniques like data de-duplication and line conditioning. This makes performance closer to a LAN environment.

Network Extension Service

A high-throughput network extension service with integrated proximity routing, which unlocks seamless mobility and simple disaster recovery plans across sites.

The other HCX components are responsible for creating the data and control planes between the client and cloud sides. Deployed as VMs in mirrored pairs, they consist of the following:

Cloud Gateway: The Cloud Gateway is an optional component which is responsible for creating encrypted tunnels for vMotion and replication traffic.

Layer 2 Concentrator: The Layer 2 Concentrator is an optional component responsible for creating encrypted tunnels for the data and control plane corresponding to stretched layer 2 traffic. Each L2C pair can handle up to 4096 stretched networks. Additional L2C pairs can be deployed as needed.

WAN Optimizer: HCX includes an optionally deployed Silver Peak WAN optimization appliance. It is deployed as a VM appliance. When deployed the CGW tunnel traffic will be redirected to traverse the WAN Opt pair.

Proxy ESXi host: Whenever the CGW is configured to connect to the cloud-side HCX site, a proxy ESXi host appears in vCenter outside of any cluster. This ESXi host has the same management and vMotion IP addresses as the corresponding CGW appliance.

HCX Licenses:

  1. Traffic on ports 80 and 443 must be allowed to https://connect.hcx.vmware.com
  2. A one-time use registration key will be provided for the client-side install provided via the IBM Cloud VMware Solutions portal. A key is required for each client side HCX installation.
  3. The Cloud side HCX registration is automatically completed by the IBM Cloud HCX deployment automation.

HCX Use Case:

  1. Migrate applications to IBM cloud seamlessly, securely and efficiently.
  2. Minimal need for long migration plans & application dependency mapping.
  3. Secure vMotion, Bulk migration, while keeping same IP/Networks.
  4. Securely Extend Datacenter to the IBM cloud.
  5. Extend networks, IP, Security policies and IT mgmt. to the IBM cloud.
  6. Securely protect – BC/DR via HCX


Install and Configure UMDS 6.5

VMware vSphere Update Manager Download Service (UMDS) is an optional module of Update Manager. For security reasons and deployment restrictions, vSphere, including Update Manager, might be installed in a secured network that is disconnected from other local networks and the Internet. Update Manager requires access to patch information to function properly. If you are using such an environment, you can install UMDS on a computer that has Internet access to download upgrades, patch binaries, and patch metadata, and then export the downloads to a portable media drive or configure an IIS server  so that they become accessible to the Update Manager server.


  • Verify that the machine on which you install UMDS has Internet access, so that UMDS can download upgrades, patch metadata and patch binaries.

  • Uninstall any older version of UMDS (1.0.x, 4.x or 5.x) if it is installed on the machine.

  • Create a database instance and configure it before you install UMDS. When you install UMDS on a 64-bit machine, you must configure a 64-bit DSN and test it from ODBC. The database privileges and preparation steps are the same as the ones used for Update Manager.

  • UMDS and Update Manager must be installed on different machines.

Installation and Configuration

Mount the vCenter ISO and open it and double-click the autorun.exe file and select vSphere Update Manager > Download Service.

  • (Optional): Select the option to use Microsoft SQL Server 2012 Express as the embedded database, and click Install (if you have not installed a database as a prerequisite).

Specify the Update Manager Download Service proxy settings (if using a proxy) and click Next.
Select the Update Manager Download Service installation and patch download directories and click Next.
The patch download directory should have around 150 GB of free space for successful patch downloads.

 In the warning message about the disk free space, click OK.

Click Install to begin the installation.

Click Finish; that completes the installation.

Configuring UMDS

UMDS does not have a GUI interface. All configuration will be done via the command line. To begin configuration, open a command prompt and browse to the directory where UMDS is installed:

In my case it is installed at C:\Program Files\VMware\Infrastructure\Update Manager.


Run the command vmware-umds -G to view the patch store location, the proxy settings and which downloads are enabled. By default it is configured to download host patches for ESX(i) 4, 5 and 6; I have disabled 4 and 5.

Downloads for a specific platform can be disabled by running:

vmware-umds -S -d embeddedEsx-5.0.0-INTL

Important commands:

To enable or disable host and virtual appliance patch downloads, you can run:

vmware-umds -S --enable-host
vmware-umds -S --disable-host
vmware-umds -S --enable-host --enable-va
vmware-umds -S --disable-host --disable-va

You can change the patch store location using:

vmware-umds -S --patch-store c:\Patches

New URLs can be added, or existing ones removed by using:

vmware-umds -S --add-url
vmware-umds -S --remove-url

To start downloading patches you can use:

vmware-umds -D

This command re-downloads the patches between set times, to reduce load during core business hours:

vmware-umds -R --start-time 2010-11-01T00:00:00 --end-time 2010-11-30T23:59:59

Once you have your patches downloaded, the next step is to how to make them available to Update Manager. There are a couple of ways you can do this depending on your environment. If you wish to export all the downloaded patches to an external drive, for transfer to the Update Manager server, you can do so by running, for example:

vmware-umds -E --export-store e:\patches

and then zip it; another way is to configure an IIS web server and publish the patch location.

For IIS, select Add Roles and Features in your Windows 2012 Server Manager and select the Web Server (IIS) checkbox.

After the IIS installation completes, start Internet Information Services (IIS) Manager, which is located in Administrative Tools. Right-click Default Web Site and select Add Virtual Directory, choose an alias for the virtual directory, and select the patch store location as the physical path.


In MIME Types you need to add .vib and .sig as the application/octet-stream type. Finally, enable Directory Browsing on the virtual directory.
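If you prefer the command line over IIS Manager, the same settings can be applied with appcmd (a sketch; “patchstore” is a placeholder for whatever alias you chose for the virtual directory):

```
%windir%\system32\inetsrv\appcmd set config "Default Web Site" ^
  /section:staticContent /+"[fileExtension='.vib',mimeType='application/octet-stream']"
%windir%\system32\inetsrv\appcmd set config "Default Web Site" ^
  /section:staticContent /+"[fileExtension='.sig',mimeType='application/octet-stream']"
%windir%\system32\inetsrv\appcmd set config "Default Web Site/patchstore" ^
  /section:directoryBrowse /enabled:true
```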


To start using this shared repository with Update Manager, log in to the vSphere Web Client, go to Update Manager, then to Admin View – Settings – Download Settings, and click Edit. Select Use a shared repository and enter the URL (http://IP_address_or_FQDN/VIRTUAL_DIRECTORY).


After clicking OK it will validate the repository and download the metadata of the downloaded patches. Then follow your existing process: create baselines, attach the baselines, and patch your hosts.







Deploy VMware Cloud Foundation on IBM Cloud – Part -1

IBM Cloud for VMware Solutions enables you to quickly and seamlessly integrate or migrate your on-premises VMware workloads to the IBM Cloud by using the scalable, secure, and high-performance IBM Cloud infrastructure and the industry-leading VMware hybrid virtualization technology.

IBM Cloud for VMware Solutions allows you to easily deploy your VMware virtual environments and manage the infrastructure resources on IBM Cloud. At the same time, you can still use your familiar native VMware product console to manage the VMware workloads.

Deployment Options:

IBM Cloud for VMware Solutions provides standardized and customizable deployment choices of VMware virtual environments. The following deployment types are offered:

  • VMware Cloud Foundation on IBM Cloud: The Cloud Foundation offering provides a unified VMware virtual environment by using standard IBM Cloud compute, storage, and network resources that are dedicated to each user deployment.
  • VMware vCenter Server on IBM Cloud: The vCenter Server offering allows you to deploy a VMware virtual environment by using custom compute, storage, and network resources to best fit your business needs.
  • VMware vSphere on IBM Cloud: The vSphere on IBM Cloud offering provides a customizable virtualization service that combines VMware-compatible bare metal servers, hardware components, and licenses, to build your own IBM-hosted VMware environment.

To use IBM Cloud for VMware Solutions to order instances, you must have an IBM Cloud account. The cost of the components that are ordered in your instances is billed to that IBM Cloud account.

When you order VMware Cloud Foundation on IBM Cloud, an entire VMware environment is deployed automatically. The base deployment consists of four IBM Cloud Bare Metal Servers with the VMware Cloud Foundation stack pre-installed and configured to provide a unified software-defined data center (SDDC) platform. Cloud Foundation natively integrates VMware vSphere, VMware NSX and VMware vSAN, and has been architected based on VMware Validated Designs.

The following image depicts the overall architecture and components of the Cloud Foundation deployment.


Physical infrastructure

Physical Infrastructure provides the physical compute, storage, and network resources to be used by the virtual infrastructure.

Virtualization infrastructure (Compute, Storage, and Network)

Virtualization infrastructure layer virtualizes the physical infrastructure through different VMware products:

  • VMware vSphere virtualizes the physical compute resources.
  • vSAN provides software-defined shared storage based on the storage in the physical servers.
  • VMware NSX is the network virtualization platform that provides logical networking components and virtual networks.

Virtualization Management

Virtualization Management consists of vCenter Server, which represents the management layer for the virtualized environment. The same familiar vSphere API-compatible tools and scripts can be used to manage the IBM hosted VMware environment.

On the IBM Cloud for VMware Solutions console, you can expand and contract the capacity of your instances using the add and remove ESXi server capability. In addition, lifecycle management functions like applying updates and upgrading the VMware components in the hosted environment are also available.

Cloud Foundation (VCF) Technical specifications

Hardware (options)

  • Small (Dual Intel Xeon E5-2650 v4 / 24 cores total, 2.20 GHz / 128 GB RAM)
  • Large (Dual Intel Xeon E5-2690 v4 / 28 cores total, 2.60 GHz / 512 GB RAM)
  • User customized (CPU model and RAM) (up to 1.5 TB of memory)

Networking

  • 10 Gbps dual public and private network uplinks
  • Three VLANs: one public and two private
  • Secure VMware NSX Edge Services Gateway

VSIs ( Virtual Server Instances)

  • One VSI for Windows AD and DNS services


  • Small option: Two 1.9 TB SSD capacity disks
  • Large option: Four 3.8 TB SSD capacity disks
  • User customized option
    • Storage disk: 1.9 or 3.8 TB SSD
    • Disk quantity: 2, 4, 6, or 8
  • Included in all storage options
    • Two 1 TB SATA boot disks
    • Two 960 GB SSD cache disks
    • One RAID disk controller
  • One 2 TB shared block storage for backups that can be scaled up to 12 TB (you can choose whether you want the storage by selecting or deselecting the Veeam on IBM Cloud service)

IBM-provided VMware licenses or bring your own licenses (BYOL)

  • VMware vSphere Enterprise Plus 6.5u1
  • VMware vCenter Server 6.5
  • VMware NSX Enterprise 6.3
  • VMware vSAN (Advanced or Enterprise) 6.6
  • SDDC Manager licenses
  • Support and Services fee (one license per node)

As you may be aware, VCF has strict requirements on the physical infrastructure. That's the reason you can deploy instances only in IBM Cloud data centers that meet those requirements. VCF can be deployed in data centers in these cities: Amsterdam, Chennai, Dallas, Frankfurt, Hong Kong, London, Melbourne, Queretaro, Milan, Montreal, Oslo, Paris, Sao Paulo, Seoul, San Jose, Singapore, Sydney, Tokyo, Toronto and Washington.

When you order a VCF instance, you can also order additional services on top of VCF:

Veeam on IBM Cloud

This service helps you manage the backup and restore of all the virtual machines (VMs) in your environment, including the backup and restore of the management components.

F5 on IBM Cloud

This service optimizes performance and ensures availability and security for applications with the F5 BIG-IP Virtual Edition (VE).

FortiGate Security Appliance on IBM Cloud

This service deploys an HA-pair of FortiGate Security Appliance (FSA) 300 series devices that can provide firewall, routing, NAT, and VPN services to protect the public network connection to your environment.

Managed Services

These services enable IBM Integrated Managed Infrastructure (IMI) to deliver dynamic remote management services for a broad range of cloud infrastructures.

Zerto on IBM Cloud

This service provides replication and disaster recovery capabilities to help protect your workloads.

This post covers some basic information about the VMware and VCF options available on IBM Cloud. In the next post I will deploy a fully automated VCF instance; till then, happy learning 🙂

NSX-T 2.0 – Host Preparation

A fabric node is a node that has been registered with the NSX-T management plane and has NSX-T modules installed. For a hypervisor host to be part of the NSX-T overlay, it must first be added to the NSX-T fabric.

As we know, NSX-T is vCenter agnostic; the host switch is configured from the NSX Manager UI. The NSX Manager owns the life cycle of the host switch and the logical switch creation on these host switches.

  • Go to Fabric -> Nodes, check the Hosts tab and click ADD.


  • Enter the name of the host and the IP address of the host, and choose the OS type. Since in this exercise I am adding an ESXi host, I chose “ESXi”. Enter the “root” credentials and, most importantly, the thumbprint. To get the thumbprint, run the following command at the ESXi command prompt: openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout


  • Click Save.


  • If you do not enter the host thumbprint, the NSX-T UI prompts you to use the default thumbprint, retrieved from the host in plain-text format.
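The thumbprint is simply the SHA-256 fingerprint of the host's rui.crt. You can see the expected format with any certificate; here is a self-contained demo using a throwaway self-signed cert (the paths and CN are arbitrary stand-ins, not from the post):

```shell
# Generate a throwaway key/cert pair (stand-in for /etc/vmware/ssl/rui.crt)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=esxi-demo" -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# The same command the post runs on the ESXi host, against the demo cert
openssl x509 -in /tmp/demo.crt -fingerprint -sha256 -noout
```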


Monitor the progress; it will install the NSX binaries on the hosts.

  • Since I am deploying in my home lab, my ESXi host had only 12 GB of RAM and the installation failed, because the minimum RAM requirement is 16 GB.


  • And finally the VIBs installed successfully.


Now let's add the host into the management plane.

Joining the hypervisor hosts with the management plane ensures that the NSX Manager and the hosts can communicate with each other.

  • Open an SSH session to the NSX Manager appliance and log in with the administrator credentials. On the NSX Manager appliance, run the get certificate api thumbprint command. The command output is a string that is unique to this NSX Manager. Copy this string.


  • Now open an SSH session to the hypervisor host and run the join management-plane command.


  • Provide the following information:
    •  Hostname or IP address of the NSX Manager with an optional port number
    • Username of the NSX Manager
    • Certificate thumbprint of the NSX Manager
    • Password of the NSX Manager


  • The command will prompt for the password (Password for API user: <NSX-Manager's-password>), and if everything is fine you should get “Node successfully joined”.
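Putting the two steps together, the exchange looks roughly like this (the IP, username and thumbprint are placeholders):

```
# On the NSX Manager
nsx-manager> get certificate api thumbprint
<unique-thumbprint-string>

# On the ESXi host (nsxcli)
esxi-host> join management-plane 192.168.110.15 username admin thumbprint <unique-thumbprint-string>
Password for API user:
Node successfully joined
```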


In the next post I will prepare a KVM host; till then, happy learning 🙂
