Category: NSX

Upgrade NSX-T 2.3 to NSX-T 2.4

This blog is about how I upgraded my home lab from NSX-T 2.3 to NSX-T 2.4. First, I downloaded the NSX-T 2.4.x upgrade bundle (the MUB file) from the My VMware download portal.

Checking prerequisites

Before starting the upgrade, check compatibility and ensure that the vCenter and ESXi versions are supported.

Upgrade NSX Manager Resources

In my lab the NSX Manager VM was running with 2 vCPUs and 8 GB of memory. With this new version the minimum requirements went up to 4 vCPUs and 16 GB of RAM, so I had to shut down the NSX-T Manager and upgrade the specs. I went with 6 vCPUs and 16 GB of memory, as I was seeing 4 vCPUs hit 100% utilisation.

The Upgrade Process

Upload the MUB file to the NSX Manager

1.png

After the MUB file is uploaded to the NSX Manager (which takes some time), the file is validated and extracted, which again takes a little extra time. In my lab, downloading, uploading, validating and extracting the upgrade bundle took around 40–50 minutes, so be patient.

Begin the upgrade and accept the EULA.

After you click "Begin Upgrade" and accept the EULA, it will throw a message asking to upgrade the "upgrade coordinator". Click "Yes" and wait for some time; after the upgrade coordinator update, the session will log out. Log in again and you will see a new upgrade interface. NOTE – Do not initiate multiple simultaneous upgrade processes for the upgrade coordinator.

4.png

Host Upgrade

Select the hosts and click on Start to start the hosts upgrade process.

5.png

While the upgrade is running, keep an eye on your vCenter. At times an ESXi host does not go into maintenance mode because of some rule or restriction you have created, so help your ESXi server get into maintenance mode.

6.png

7.png

Continue to monitor the progress and, once it completes successfully, move on with "Next".

Edge Upgrade

Clicking "NEXT" takes you to the Edge section. Click "Start" and continue to monitor the progress.

8.png

A very simple and straightforward process; once the upgrade is completed, click "Next".

9.png

Controller Upgrade

From NSX-T 2.4 onwards, the controllers have been merged into the NSX Manager, so there is no need to upgrade the controllers. Move ahead and upgrade the NSX Manager.

10.png

NSX Manager Upgrade

The NSX Manager upgrade gives you two options:

  • Allow Transport Node connection after a single node cluster is formed.
  • Allow Transport Node connections only after three node cluster is formed.

Choose the option that matches how your environment is configured.

11

Accept the upgrade notification. You can safely ignore any upgrade-related errors, such as an HTTP service disruption, that appear at this time. These errors appear because the management plane is rebooting during the upgrade. Wait until the reboot finishes and the services are re-established.

You can also get into the CLI: log in to the NSX Manager and verify that the services have started by running #get service. When the services start, the service state appears as running. Some of the services include SSH, install-upgrade, and manager.
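For reference, a quick service check from the NSX Manager CLI could look like the lines below. The prompt and output are illustrative and trimmed; exact service names and formatting vary between NSX-T versions.

nsxmgr> get service install-upgrade
Service name: install-upgrade
Service state: running

nsxmgr> get service manager
Service name: manager
Service state: running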

Finally, after around one hour in my lab, the Manager was also updated successfully.

12.png

This completes the upgrade process. Check the health of every NSX-T component and, if everything is green, you can now go ahead and shut down and delete the NSX Controllers. Hope this helps you in your NSX-T upgrade planning.

 

 

 

 

VMware PKS, NSX-T & Kubernetes Networking & Security explained

In continuation of my last post on VMware PKS and NSX-T (Getting Started with VMware PKS & NSX-T), here is the next one explaining the behind-the-scenes NSX-T automation for Kubernetes performed by the VMware NSX CNI plugin and PKS.

NSX-T addresses all the K8s networking functions: load balancing, IPAM, routing and firewalling. It supports complete automation and dynamic provisioning of the network objects required for K8s and its workloads, and this is what I am going to uncover in this blog post.

Other features include support for different topology choices for POD and NODE networks (NAT or NO-NAT), network security policies for Kubernetes clusters, namespaces and individual services, and network traceability/visibility using NSX-T's built-in operational tools for Kubernetes.

I will be covering the deployment procedure of PKS later, but first I want to explain what happens on the NSX-T side when you run "#pks create cluster" on the PKS command line, and then when you create K8s namespaces and PODs.

pks create cluster

So when you run #pks create cluster with some arguments, PKS goes to vCenter and deploys Kubernetes master and worker VMs based on the specification you chose during deployment. On the NSX-T side a new logical switch gets created and connected to these VMs (in this example one K8s master and two nodes have been deployed). Along with the logical switch, a Tier-1 cluster router gets created and connected to your organisation's Tier-0 router.
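For completeness, the PKS CLI call that triggers all of this looks roughly like the lines below. The cluster name, external hostname and plan are illustrative values (the available plans depend on how your PKS tile was configured), and note that the CLI verb is written create-cluster. The second command simply polls the provisioning status while the VMs and NSX-T objects are being created.

pks create-cluster k8s-cluster-01 --external-hostname k8s-cluster-01.lab.local --plan small
pks cluster k8s-cluster-01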

K8s Master and Node Logical Switches

13.png

K8s Cluster connectivity towards Tier-0

12.png

Kubernetes Namespaces and NSX-T

Now, if the K8s cluster deployed successfully, Kubernetes creates three namespaces by default:

  • Default – The default namespace for objects with no other namespace.
  • kube-system – The namespace for objects created by the Kubernetes system.
  • kube-public – The namespace is created automatically and readable by all users (including those not authenticated). This namespace is mostly reserved for cluster usage, in case that some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.

For each default namespace, PKS automatically deploys and configures an NSX-T logical switch, and each logical switch has its own Tier-1 router connected to Tier-0.
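You can confirm the Kubernetes side of this mapping with a quick kubectl call; the listing below is illustrative output (the pks-infrastructure namespace discussed next also shows up on PKS-provisioned clusters).

kubectl get namespaces
NAME                 STATUS    AGE
default              Active    1h
kube-public          Active    1h
kube-system          Active    1h
pks-infrastructure   Active    1h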

14.png

pks-infrastructure Namespace and NSX-T

In the above figure you can clearly see the "default", "kube-public" and "kube-system" logical switches. There is another logical switch, "pks-infrastructure", which gets created for the PKS-specific namespace that runs PKS-related components such as the NSX-T CNI. "pks-infrastructure" runs the NSX-NCP CNI plugin to integrate NSX-T with Kubernetes.

15.png

kube-system Namespace & NSX-T

Typically, this namespace runs pods like heapster, kube-dns, kubernetes-dashboard, the monitoring DB, the telemetry agent, and things like ingress controllers if you deploy them.

16.png

On the NSX-T side, as explained earlier, a logical switch gets created for this namespace, and for each system POD a logical port gets created by PKS on NSX-T.

17.png

Default Namespace & NSX-T

This is the cluster's default namespace, which holds the default set of pods, services, and deployments used by the cluster. When you deploy a POD without creating or specifying a new namespace, the "default" namespace becomes the container that holds these pods, and as explained earlier it also has its own NSX-T logical switch with an uplink port to a Tier-1 router.

18.png

So when you deploy a Kubernetes pod without a new namespace, since that POD will be part of the "default" namespace, PKS creates an NSX-T logical port on the default logical switch. Let's create a simple POD:

19.png
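The screenshot above shows the pod created in my lab. A minimal equivalent, with a generic nginx image and name used purely for illustration, would be the following (on the kubectl versions of that era, kubectl run creates a Deployment backing the pod; newer versions create a bare pod):

kubectl run nginx --image=nginx --port=80
kubectl get pods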

let’s go back to NSX-T’s  “Default Namespace” logical switch:

20.png

As you can see, a new logical port has been created on the default logical switch.

New Namespace & NSX-T

Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. In simple terms, namespaces are like an org VDC in vCD, and the Kubernetes best practice is to arrange PODs in namespaces. So when we create a new namespace, what happens in NSX-T?

I have created a new namespace called "demo".
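On the Kubernetes side this is a single command; the screenshot below shows the same step from my lab.

kubectl create namespace demo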

23.png

If you observe the images below, the left image shows the default switches and the right image shows the logical switches after the creation of the new namespace.

As you can see, a new logical switch has been created for the new namespace.

If you create PODs in the default namespace, all the pods get attached to the default logical switch. If you create a namespace (which is the K8s best practice), a new logical switch gets created, and any POD deployed into this namespace becomes part of its NSX-T logical switch. This new logical switch also gets its own Tier-1 router connecting to the Tier-0 router.

24.png

Expose PODs to Outer World

In this example we deployed a POD and got internal network connectivity, but internal-only connectivity is not going to expose this web server to the outside world; that is forbidden by default in Kubernetes. So we need to expose this deployment through a load balancer on the public interface on a specific port. Let's do that:
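A sketch of the commands involved is below; the deployment name nginx follows the illustrative example earlier in this post, while the actual names used are visible in the screenshots. The EXTERNAL-IP that shows up in the service output is the VIP allocated on the NSX-T load balancer.

kubectl expose deployment nginx --type=LoadBalancer --port=80
kubectl get service nginx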

26.png

27.png

Let's browse this app using the EXTERNAL-IP; as you may know, the CLUSTER-IP is an internal IP.

33.png

Kubernetes Cluster and NSX-T Load Balancer

As above, when we expose this deployment using a service, NSX-T already has a cluster load balancer that was deployed automatically when we created the cluster. On this load balancer, NSX-CNI goes ahead and adds the pod to the virtual servers under a new load balancer VIP.

28.png

If we drill down to the pool members of this VIP, we will see our Kubernetes pod endpoint IPs.

29.png

Behind the scenes, when you deploy a cluster, an LB logical switch and an LB Tier-1 router are created with logical connectivity to the load balancer and the Tier-0 router, so that you can access the deployment externally.

3031

This is what your Tier-0 will look like: it has connectivity to all the Tier-1 routers, and the Tier-1 routers are connected to the namespace logical switches.

32.png

All of these logical switches, Tier-1 routers, the Tier-0 router, their connectivity, the LB creation and so on have been done automatically by the NSX-T container (CNI) plugin and PKS. I was really thrilled when I tried this the first time; it is very simple once you understand the concept.

Kubernetes and Micro-segmentation

The NSX-T container plugin exposes container "Pods" as NSX-T logical switch ports, and because of this we can easily implement micro-segmentation rules. Once "Pods" are exposed to the NSX ecosystem, we can use the same approach we have with virtual machines for implementing micro-segmentation and other security measures.

3435

Or you can use security groups based on tags to achieve micro-segmentation.

3637

 

In this post I have tried to explain what happens behind the scenes on the NSX-T networking stack when you deploy and expose your applications on Kubernetes, and how we can bring in proven NSX-T based micro-segmentation.

Enjoy learning PKS, NSX-T and Kubernetes, one of the best combinations for Day-1 and Day-2 operation of Kubernetes 🙂 and feel free to leave comments and suggestions.

 

 

 

Upgrade NSX-T 2.1 to NSX-T 2.3

I am working on a PKS deployment and will soon be sharing my deployment procedure for PKS, but before proceeding with the PKS deployment I need to upgrade my NSX-T lab environment to support the latest PKS, as per the compatibility matrix below.

PKS Version | Compatible NSX-T Versions | Compatible Ops Manager Versions
v1.2        | v2.2, v2.3                | v2.2.2+, v2.3.1+
v1.1.6      | v2.1, v2.2                | v2.1.x, v2.2.x
v1.1.5      | v2.1, v2.2                | v2.1.x, v2.2.x
v1.1.4      | v2.1                      | v2.1.x, v2.2.x
v1.1.3      | v2.1                      | v2.1.0 – 2.1.6
v1.1.2      | v2.1                      | v2.1.x, v2.2.x
v1.1.1      | v2.1 – Advanced Edition   | v2.1.0 – 2.1.6

In this post I will cover the procedure to upgrade NSX-T 2.1 to NSX-T 2.3.

Before proceeding with the upgrade, let's check the health of the current deployment. This is very important because, if something is not working after the upgrade, we would otherwise not know whether it was working before the upgrade. So let's get into the health validation and version checks.

Validate Current Version Components Health

The first thing to check is the management cluster and controller connectivity; ensure they are up.

Next is to Validate host deployment status and connectivity.

34

Check the Edge health

5.png

Let's check the transport node health.

6

Upgrade Procedure

Now Download the upgrade bundle

7.png

Go to NSX Manager and browse to Upgrade

8.png

Upload the downloaded upgrade bundle file in NSX Manager

9.png

Since the upgrade bundle is very big, the upload, extraction and verification take a lot of time. Once the package has been uploaded, click "BEGIN UPGRADE".

11

The upgrade coordinator will then check the install for any potential issues. In my environment there is one warning for the Edge that the connectivity is degraded – this is because I have disconnected the fourth NIC, which is safe to ignore. When you are doing this in your environment, please assess all the warnings and take the necessary actions before proceeding with the upgrade.

12

Clicking Next takes you to the Hosts Upgrade page. Here you can define the order and method of upgrade for each host, and define host groups to control the order of upgrade. I've gone with the defaults, serial (one at a time) upgrades rather than parallel, because I have two hosts in each cluster.

Click START to begin the upgrade: the hosts will be put into maintenance mode, then upgraded and rebooted if necessary. Make sure DRS is enabled and that the VMs on the host being put into maintenance mode can be vMotioned off. Once a host has upgraded and its Management Plane Agent has reported back to the Manager, the Upgrade Coordinator moves on to the next host in the group.

13.png

Once the hosts are upgraded, click Next to move to the Edge Upgrade page. Edge clusters can be upgraded in parallel if you have multiple edge clusters, but the Edges forming an Edge Cluster are upgraded serially to ensure connectivity is maintained. In my lab I have a single Edge Cluster with two Edge VMs, so this will be upgraded one Edge at a time. Click "START" to start the Edge upgrade process.

14

15

Once the Edge Cluster has been upgraded successfully, click NEXT to move to the Controller Node Upgrade page. Here you can't change the upgrade sequence of the controllers; controllers are upgraded in parallel by default. (In my lab I am running a single controller because of resource constraints, but in production you will see three controllers deployed in a cluster.) Click "START" to begin the upgrade process.

16.png

Once the controller upgrade has completed, click NEXT to move to the NSX Manager upgrade page. The NSX Manager will become unavailable for about 5 minutes after you click START, and it might take 15 to 20 minutes to upgrade the Manager.

17.png

Once the Manager upgrade has completed, review the upgrade cycle.

19.png

You can re-validate the installation as we did at the start of the upgrade, checking that all the lights are green and that the component versions have increased.

VMware HCX on IBM Cloud

The goal of every enterprise organization now is to accelerate time to business value for their customers by taking advantage of the cloud. Most enterprise customer data centers are built on the VMware platform, and while considering cloud adoption, organizations face the challenges listed below:

  • Incompatible environments – Customers currently have various VMware vSphere versions deployed on-premises, and when they would like to move to the latest build of an SDDC-based cloud they face trouble due to differences in versions, networking architectures, server CPUs, etc.
  • Network complexity to the cloud – Internet/WAN technologies are complex, non-automated, and inconsistent. Setting up and maintaining VPNs, firewalls, direct connects and routing across network/co-location provider networks and enterprise networks is not easy.
  • Complex application dependencies – Enterprise applications are complex, and they interact with various other servers in the datacenter such as storage, databases, other solutions, the DMZ, security solutions and platform applications. Application dependencies are hard to assess, and customers face a lot of issues in addressing them.

HCX enables enterprises to overcome the above challenges. Delivered as a VMware Cloud Service and available through IBM Cloud, the service enables secure, seamless interoperability and hybridity between any VMware vSphere based clouds, enabling large-scale migration to modern clouds/datacenters and application mobility with no application downtime. Built-in DR and security enable unified compliance, governance and control. Automated deployment makes it straightforward for service providers and enterprises to rapidly get to the desired state and shorten time to value.

1.png

HCX on IBM Cloud

HCX on IBM Cloud unlocks the IBM Cloud for on-premises vSphere environments, where you can build an abstraction layer between your on-premises data centers and the IBM Cloud. Once done, networks can be stretched securely across the HCX hybrid interconnect, which enables seamless mobility of virtual machines. HCX enables hybrid capabilities in vCenter so that workloads can be protected and migrated to and from the IBM Cloud.

HCX uses vSphere Replication to provide robust replication capabilities at the hypervisor layer. Coupled with HCX's ability to stretch layer 2 networks, no intrusive change at the OS or application layer is required. This allows VMs to be easily migrated to and from the IBM Cloud without any change. HCX also optimizes migration traffic with built-in WAN optimization.

The HCX solution requires at least one VCF V2.1 or vCenter Server instance running vSphere 6.5 with the HCX on IBM Cloud service installed. This solution also requires the HCX software to be installed on-premises for the vSphere environment, in which case the HCX on IBM Cloud instance must be ordered for licensing and activation of the on-premises HCX installation.

HCX Components

HCX comprises a cloud-side and a client-side install at a minimum. An instance of HCX must be deployed per vCenter, regardless of whether the vCenters where HCX is to be deployed are linked in the same SSO domain on the client or cloud side. Site configurations supported by HCX client-to-cloud are: one to one, one to many, many to one and many to many. HCX has the concept of a cloud-side install and a customer-side install.

Cloud side = destination (VCF or vCenter as a Service on IBM Cloud). The cloud side install of HCX is the “slave” instance of an HCX client to cloud relationship. It is controlled by the customer-side install.

Customer side = any vSphere instance (source). The client side of the HCX install is the master, which controls the cloud-side slave instance via its vCenter web client UI.

HCX Manager

The cloud-side HCX Manager is the first part of an HCX install and is deployed on the cloud side automatically by IBM Cloud for VMware Solutions. Initially it is a single OVA image file specific to the cloud side, deployed in conjunction with an NSX Edge load balancer/firewall that is configured specifically for the HCX role. The HCX Manager is configured to listen for incoming client-side registration, management and control traffic via the configured NSX Edge load balancer/firewall.

The client-side HCX Manager is a client-side specific OVA image file which provides the UI functionality for managing and operating HCX. The client-side HCX Manager is responsible for registering with the cloud-side HCX Manager and creating a management plane between the client and cloud side. It is also responsible for deploying fleet components on the client side and instructing the cloud side to do the same.

HCX Interconnect Service

The interconnect service provides resilient access over the internet and private lines to the target site while providing strong encryption, traffic engineering and extending the data center. This service simplifies secure pairing of site and management of HCX components.

WAN Optimization – Improves performance characteristics of the private lines or internet paths by leveraging WAN optimization techniques like data de-duplication and line conditioning. This makes performance closer to a LAN environment.

Network Extension Service

High throughput Network Extension service with integrated Proximity Routing which unlocks seamless mobility and simple disaster recovery plans across sites.

HCX other components are responsible for creating the data and control planes between client and cloud side. Deployed as VMs in mirrored pairs, the component consists of the following:

Cloud Gateway: The Cloud Gateway is an optional component which is responsible for creating encrypted tunnels for vMotion and replication traffic.

Layer 2 Concentrator: The Layer 2 Concentrator is an optional component responsible for creating encrypted tunnels for the data and control plane corresponding to stretched layer 2 traffic. Each L2C pair can handle up to 4096 stretched networks. Additional L2C pairs can be deployed as needed.

WAN Optimizer: HCX includes an optionally deployed Silver Peak WAN optimization appliance. It is deployed as a VM appliance. When deployed the CGW tunnel traffic will be redirected to traverse the WAN Opt pair.

Proxy ESX host: Whenever the CGW is configured to connect to the cloud side HCX site, a proxy ESXi host will appear in vCenter outside of any cluster. This ESXi host has the same management and vMotion IP address as the corresponding CGW appliance.

HCX Licenses:

  1. Traffic on 80 and 443 must be allowed to https://connect.hcx.vmware.com
  2. A one-time-use registration key for the client-side install is provided via the IBM Cloud VMware Solutions portal. A key is required for each client-side HCX installation.
  3. The Cloud side HCX registration is automatically completed by the IBM Cloud HCX deployment automation.

HCX Use Case:

  1. Migrate applications to IBM cloud seamlessly, securely and efficiently.
  2. Minimal need for long migration plans & application dependency mapping.
  3. Secure vMotion, Bulk migration, while keeping same IP/Networks.
  4. Securely Extend Datacenter to the IBM cloud.
  5. Extend networks, IP, Security policies and IT mgmt. to the IBM cloud.
  6. Securely protect – BC/DR via HCX

……………………………………………………………………………………………………………………………………………………….

Install and Configure UMDS 6.5

VMware vSphere Update Manager Download Service (UMDS) is an optional module of Update Manager. For security reasons and deployment restrictions, vSphere, including Update Manager, might be installed in a secured network that is disconnected from other local networks and the Internet. Update Manager requires access to patch information to function properly. If you are using such an environment, you can install UMDS on a computer that has Internet access to download upgrades, patch binaries, and patch metadata, and then export the downloads to a portable media drive or configure an IIS server  so that they become accessible to the Update Manager server.

Pre-requisite

  • Verify that the machine on which you install UMDS has Internet access, so that UMDS can download upgrades, patch metadata and patch binaries.

  • Uninstall any older version of UMDS (1.0.x, 4.x, or 5.x) if it is installed on the machine.

  • Create a database instance and configure it before you install UMDS. When you install UMDS on a 64-bit machine, you must configure a 64-bit DSN and test it from ODBC. The database privileges and preparation steps are the same as the ones used for Update Manager.

  • UMDS and Update Manager must be installed on different machines.

Installation and Configuration

Mount the vCenter ISO and open it and double-click the autorun.exe file and select vSphere Update Manager > Download Service.

  • (Optional): Select the option to use Microsoft SQL Server 2012 Express as the embedded database and click Install (if you have not already set up a database as a prerequisite).

2017-11-27 14_27_36-10.139.59.233 - Remote Desktop Connection

2017-11-27 14_29_05-10.139.59.233 - Remote Desktop Connection

2017-11-27 14_30_00-10.139.59.233 - Remote Desktop Connection.png

2017-11-27 14_32_35-10.139.59.233 - Remote Desktop Connection
2017-11-27 14_32_49-10.139.59.233 - Remote Desktop Connection
Specify the Update Manager Download Service proxy settings (if using a proxy) and click Next.

2017-11-27 14_33_22-10.139.59.233 - Remote Desktop Connection.png

Select the Update Manager Download Service installation and patch download directories and click Next. The patch download directory should have around 150 GB of free space for successful patch downloads.
2017-11-27 14_34_01-10.139.59.233 - Remote Desktop Connection.png

 In the warning message about the disk free space, click OK.

Click Install to begin the installation.

2017-11-27 14_34_12-10.139.59.233 - Remote Desktop Connection
Click Finish; that completes the installation.

Configuring UMDS

UMDS does not have a GUI interface. All configuration will be done via the command line. To begin configuration, open a command prompt and browse to the directory where UMDS is installed:

In my case it is installed  at C:\Program Files\VMware\Infrastructure\Update Manager>

1.png

Run the command vmware-umds -G to view the patch store location, the proxy settings and which downloads are enabled. By default it is configured to download host patches for ESX(i) 4, 5 and 6; I have disabled 4 and 5.

2.png

Downloads can be disabled by running:

#vmware-umds -S -d embeddedEsx-5.0.0-INTL

Important commands:

You can enable or disable all host patch downloads by running:

vmware-umds -S --enable-host
vmware-umds -S --disable-host
vmware-umds -S --enable-host --enable-va
vmware-umds -S --disable-host --disable-va

You can change the patch store location using:

vmware-umds -S --patch-store c:\Patches

New URLs can be added, or existing ones removed by using:

vmware-umds -S --add-url
vmware-umds -S --remove-url

To start downloading patches, use:

vmware-umds -D

The following command re-downloads the patches between the set times, which helps reduce load during core business hours:

vmware-umds -R --start-time 2010-11-01T00:00:00 --end-time 2010-11-30T23:59:59
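Putting the pieces above together, a typical first-time configuration run might look like the sequence below; the patch store path is only an example, and the commands are the same ones already shown above.

vmware-umds -S --patch-store D:\UMDS\Patches
vmware-umds -S --enable-host
vmware-umds -D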

Once you have your patches downloaded, the next step is to make them available to Update Manager. There are a couple of ways you can do this, depending on your environment. If you wish to export all the downloaded patches to an external drive for transfer to the Update Manager server, you can do so by running, for example:

vmware-umds -E --export-store e:\patchs and then zip it; another way is to configure an IIS web server and publish the patch location.

For IIS, select Add Roles and Features in your Windows Server 2012 Server Manager and select the Web Server (IIS) checkbox.

After the IIS installation is completed, start Internet Information Services (IIS) Manager, which is located in Administrative Tools. Right-click Default Web Site and select Add Virtual Directory, choose an alias for the virtual directory, and select the patch store location as the physical path.

3.png

In the MIME Types, you need to add .vib and .sig as application/octet-stream types. Finally, enable Directory Browsing on the virtual directory.

4

To start using this shared repository with Update Manager, log in to the vSphere Web Client, go to Update Manager, then go to Admin View – Settings – Download Settings and click Edit. Select "Use a shared repository" and enter the URL (http://<IP address or FQDN>/<VIRTUAL_DIRECTORY>).

5.png

After clicking OK it will validate the repository and download the metadata of the downloaded patches; then follow your existing process, such as creating baselines, attaching the baselines and patching your hosts.

 

 

 

 

 

 

Deploy VMware Cloud Foundation on IBM Cloud – Part -1

IBM Cloud for VMware Solutions enables you to quickly and seamlessly integrate or migrate your on-premises VMware workloads to the IBM Cloud by using the scalable, secure, and high-performance IBM Cloud infrastructure and the industry-leading VMware hybrid virtualization technology.

IBM Cloud for VMware Solutions allows you to easily deploy your VMware virtual environments and manage the infrastructure resources on IBM Cloud. At the same time, you can still use your familiar native VMware product console to manage the VMware workloads.

Deployment Options:

IBM Cloud for VMware Solutions provides standardized and customizable deployment choices of VMware virtual environments. The following deployment types are offered:

  • VMware Cloud Foundation on IBM Cloud: The Cloud Foundation offering provides a unified VMware virtual environment by using standard IBM Cloud compute, storage, and network resources that are dedicated to each user deployment.
  • VMware vCenter Server on IBM Cloud: The vCenter Server offering allows you to deploy a VMware virtual environment by using custom compute, storage, and network resources to best fit your business needs.
  • VMware vSphere on IBM Cloud: The vSphere on IBM Cloud offering provides a customizable virtualization service that combines VMware-compatible bare metal servers, hardware components, and licenses, to build your own IBM-hosted VMware environment.

To use IBM Cloud for VMware Solutions to order instances, you must have an IBM Cloud account. The cost of the components that are ordered in your instances is billed to that IBM Cloud account.

When you order VMware Cloud Foundation on IBM Cloud, an entire VMware environment is deployed automatically. The base deployment consists of four IBM Cloud Bare Metal Servers with the VMware Cloud Foundation stack pre-installed and configured to provide a unified software-defined data center (SDDC) platform. Cloud Foundation natively integrates VMware vSphere, VMware NSX and VMware vSAN, and the stack is architected based on VMware Validated Designs.

The following image depicts the overall architecture and components of the Cloud Foundation deployment.

1.png

Physical infrastructure

Physical Infrastructure provides the physical compute, storage, and network resources to be used by the virtual infrastructure.

Virtualization infrastructure (Compute, Storage, and Network)

Virtualization infrastructure layer virtualizes the physical infrastructure through different VMware products:

  • VMware vSphere virtualizes the physical compute resources.
  • vSAN provides software-defined shared storage based on the storage in the physical servers.
  • VMware NSX is the network virtualization platform that provides logical networking components and virtual networks.

Virtualization Management

Virtualization Management consists of vCenter Server, which represents the management layer for the virtualized environment. The same familiar vSphere API-compatible tools and scripts can be used to manage the IBM hosted VMware environment.

On the IBM Cloud for VMware Solutions console, you can expand and contract the capacity of your instances using the add and remove ESXi server capability. In addition, lifecycle management functions like applying updates and upgrading the VMware components in the hosted environment are also available.

Cloud Foundation (VCF) Technical specifications

Hardware (options)

  • Small (Dual Intel Xeon E5-2650 v4 / 24 cores total, 2.20 GHz / 128 GB RAM)
  • Large (Dual Intel Xeon E5-2690 v4 / 28 cores total, 2.60 GHz / 512 GB RAM)
  • User customized (CPU model and RAM) ( up to 1.5 TB of Memory)

Networking

  • 10 Gbps dual public and private network uplinks
  • Three VLANs: one public and two private
  • Secure VMware NSX Edge Services Gateway

VSIs ( Virtual Server Instances)

  • One VSI for Windows AD and DNS services

Storage:

  • Small option: Two 1.9 TB SSD capacity disks
  • Large option: Four 3.8 TB SSD capacity disks
  • User customized option
    • Storage disk: 1.9 or 3.8 TB SSD
    • Disk quantity: 2, 4, 6, or 8
  • Included in all storage options
    • Two 1 TB SATA boot disks
    • Two 960 GB SSD cache disks
    • One RAID disk controller
  • One 2 TB shared block storage for backups that can be scaled up to 12 TB (you can choose whether you want the storage by selecting or deselecting the Veeam on IBM Cloud service)

IBM-provided VMware licenses or bring your own licenses (BYOL)

  • VMware vSphere Enterprise Plus 6.5u1
  • VMware vCenter Server 6.5
  • VMware NSX Enterprise 6.3
  • VMware vSAN (Advanced or Enterprise) 6.6
  • SDDC Manager licenses
  • Support and Services fee (one license per node)

As you may be aware, VCF has strict requirements on the physical infrastructure. For that reason, you can deploy instances only in IBM Cloud data centers that meet the requirements. VCF can be deployed in the data centers in these cities: Amsterdam, Chennai, Dallas, Frankfurt, Hong Kong, London, Melbourne, Queretaro, Milan, Montreal, Oslo, Paris, Sao Paulo, Seoul, San Jose, Singapore, Sydney, Tokyo, Toronto, Washington.

When you order a VCF instance, you can also order additional services on top of VCF:

Veeam on IBM Cloud

This service helps you manage the backup and restore of all the virtual machines (VMs) in your environment, including the backup and restore of the management components.

F5 on IBM Cloud

This service optimizes performance and ensures availability and security for applications with the F5 BIG-IP Virtual Edition (VE).

FortiGate Security Appliance on IBM Cloud

This service deploys an HA-pair of FortiGate Security Appliance (FSA) 300 series devices that can provide firewall, routing, NAT, and VPN services to protect the public network connection to your environment.

Managed Services

These services enable IBM Integrated Managed Infrastructure (IMI) to deliver dynamic remote management services for a broad range of cloud infrastructures.

Zerto on IBM Cloud

This service provides replication and disaster recovery capabilities to help protect your workloads.

This post covered some basic information about the VMware and VCF options available on IBM Cloud. In the next post I will deploy a fully automated VCF. Till then, happy learning 🙂

NSX-T 2.0 – Host Preparation

A fabric node is a node that has been registered with the NSX-T management plane and has NSX-T modules installed. For a hypervisor host to be part of the NSX-T overlay, it must first be added to the NSX-T fabric.

As we know, NSX-T is vCenter agnostic; the host switch is configured from the NSX Manager UI. The NSX Manager owns the life cycle of the host switch and the logical switch creation on these host switches.

  • Go to Fabric -> Nodes, select the Hosts tab and click ADD.

1

  • Enter the name of the host, the IP address of the host, and choose the OS type (since in this exercise I am adding an ESXi host, I chose "ESXi"). Enter the "root" credentials and, most importantly, the thumbprint. To get the thumbprint, run the following command at the ESXi command prompt – # openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout

6

  • Click Save.

2

  • If you do not enter the host thumbprint, the NSX-T UI prompts you to use the default thumbprint in the plain text format retrieved from the host.

3

Monitor the progress; it will install the NSX binaries on the hosts.

  • Since I am deploying in my home lab, my ESXi host had only 12 GB of RAM and the installation failed, because the minimum RAM requirement is 16 GB.

4

  • And finally the VIBs were successfully installed.

5

Now let's add the host to the management plane.

Joining the hypervisor hosts with the management plane ensures that the NSX Manager and the hosts can communicate with each other.

  • Open an SSH session to the NSX Manager appliance and log in with the administrator credentials. On the NSX Manager appliance, run the get certificate api thumbprint command. "The command output is a string of numbers that is unique to this NSX Manager." Copy this string.

7

  • Now open an SSH session to the hypervisor host and run the join management-plane command.

8

  • Provide the following information:
    •  Hostname or IP address of the NSX Manager with an optional port number
    • Username of the NSX Manager
    • Certificate thumbprint of the NSX Manager
    • Password of the NSX Manager

9

  • The command will prompt for "Password for API user: <NSX-Manager's-password>", and if everything is fine you should get "Node successfully joined" (a consolidated example follows below).
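Put together, and assuming an ESXi host where the command is run from the NSX CLI (nsxcli), the join step looks roughly like the lines below; the Manager IP and thumbprint are placeholders.

esxi-host# nsxcli
esxi-host> join management-plane 192.168.110.15 username admin thumbprint <nsx-manager-api-thumbprint>
Password for API user:
Node successfully joined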

1.png

In the next post I will be targeting KVM host preparation. Till then, happy learning 🙂

NSX-T 2.0 – Deploying NSX Controller

NSX Controllers control virtual networks and overlay transport tunnels. NSX Controllers are deployed as a cluster of highly available appliances that are responsible for the automated deployment of virtual networks across the entire NSX-T architecture, and to achieve high availability of the control plane the NSX Controllers are deployed in a cluster of three instances.

Deploying the NSX Controller is very similar to deploying the NSX Manager appliance. I've deployed one NSX Controller onto the same network as the NSX Manager.

Let’s Start Deploying Controller:

1 – First login to ESXi using HTML5 Client and run the New Virtual Machine wizard and select “Deploy a virtual machine from an OVF or OVA file”

 

C1

2 – Enter the name for the NSX-T Controller appliance (nsxc.avnlab.com) and choose NSX-T controller appliance OVA (nsx-controller-2.0.0.0.0.6522091.ova)

c2

3 – Choose your storage

c3

4 – Choose your Network and disk type

c4

5 – Enter the passwords for the "root user", "admin user" and "audit user"; we need to set a complex password. The password complexity requirements are as below:

  • At least eight characters
  • At least one lower-case letter
  • At least one upper-case letter
  • At least one digit
  • At least one special character
  • At least five different characters
  • No dictionary words
  • No palindromes

c5

6 – In the same window, enter the "Host Name", default gateway, IP address and other network-related information.

c6

7 – Review the configuration and click Finish

c7

8 – It will take some time, and once the OVA import completes, you are done.

c8

Validate Controller Network Configuration as below:

1.png

Connect Controller Cluster to NSX Manager

Get NSX Manager API thumbprint

  1. Log onto the NSX Manager via SSH using the admin credentials.
  2. Use get certificate api thumbprint to get the SSL certificate thumbprint. Copy the output to use later
  3. 2.png

Join NSX Controller to NSX Manager

 

  1. Log on to the NSX Controllers via SSH using the admin credentials.
  2. Use join management-plane <NSX Manager> username admin thumbprint <API Thumbprint>
  3. 3
  4. The command will prompt for the "admin" password.
  5. Once entered, wait for some time; if all goes well you will see the "successful" message below.
  6. 4
  7. Enter the command "get managers" on the NSX Controller to view the connection to the Manager.
  8. 5
  9. On the NSX Manager, run the command "get nodes" to verify that the registration is successful.
  10. 6.png
  11. From the command line, using the command "get management-cluster status", you can see the details and their status – the controller is listed in the control cluster, but the cluster status is "UNSTABLE".
  12. 7.png

Configure the Controller Cluster:

To configure the controller cluster you need to log on to any of the controllers and initialise the cluster. Since I am deploying this in my home lab and have a single controller, let's log in to the controller and initialize the cluster (a condensed command sketch follows the steps below).

 

  1. Log onto the Controller node via SSH using the admin credentials.
  2. Use set control-cluster security-model shared-secret to configure the shared secret.
  3. 8.png
  4. After the secret is configured, use initialize control-cluster to promote this node to master controller.
  5. 9
  6. Now you can see that the "Control Cluster Status" is "Stable".
  7. 10.png
  8. You can also view this in the GUI console.
  9. 11.png
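Condensed, the initialization above boils down to the following controller CLI commands (the shared-secret value is a placeholder; the syntax is taken from my NSX-T 2.0 lab, so double-check it against the CLI reference for your version):

nsx-controller> set control-cluster security-model shared-secret secret <your-shared-secret>
nsx-controller> initialize control-cluster
nsx-controller> get control-cluster status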

So now we have completed the NSX Manager installation, the NSX Controller installation and their integration.

Happy Learning 🙂

 

 

NSX-T 2.0 – Deploying NSX Manager

NSX-T has been decoupled from vCenter and is available for multiple platforms. With so much happening around SDN and its integration with microservices running on containers and Kubernetes, it is the right time to start getting familiar with NSX-T before customers start adopting it.

So the first step towards learning NSX-T is to read through the official documentation and, in parallel, start deploying the NSX-T 2.0 components. Here is the deployment flow that I will follow:

  1. Install NSX Manager.
  2. Install NSX Controllers.
  3. Join NSX Controllers with the management plane.
  4. Initialize the control cluster to create a master controller.
  5. Join NSX Controllers into a control cluster. NSX Manager installs NSX-T modules after the hypervisor hosts are added.
  6. Join hypervisor hosts with the management plane. This causes the host to send its host certificate to the management plane.
  7. Install NSX Edges.
  8. Join NSX Edges with the management plane.
  9. Create transport zones and transport nodes.

So let’s move towards our first step,  deploy NSX Manager….

Here are the system requirements for NSX-T Manager; NSX-T has specific requirements regarding hardware resources and software versions.

1.png

For my lab I am going to use nested ESXi 6.5.

1 – First login to ESXi using HTML5 Client and run the New Virtual Machine wizard and select “Deploy a virtual machine from an OVF or OVA file”

C1.png

Enter the name for the NSX-T Manager appliance (nsxtm.avnlab.com) and choose NSX-T Manager appliance OVA (nsx-unified-appliance-2.0.0.0.0.6522097)

1.png

Select your storage

2.png

Choose your network , Disk type

3.png

Enter the passwords for the "root user", "admin user" and "audit user"; we need to set a complex password, otherwise it will ask for a new password after deployment.

NOTE – NSX-T  core services on the appliance will not start until a password with sufficient complexity is set.

Password complexity requirement:

At least eight characters
At least one lower-case letter
At least one upper-case letter
At least one digit
At least one special character
At least five different characters
No dictionary words
No palindromes

4.png

In the same window, enter the "Hostname"; the role name will be "nsx-manager" – don't change it.

5.png

Enter the DNS and NTP server details, and enable SSH and root SSH login (if required).

6.png

and click finish

7.png

It will take some time, and once the OVA import completes, you are done.

8.png

Log in with the IP address that we specified during the installation, and here is your first NSX-T login screen. Log in with the user name "admin" and the password that we specified in the steps above.

11.png

Accept “End User License Agreement” and click on “Continue”

10.png

And here is your first successful NSX-T installation 🙂 Finally, the NSX Manager is deployed.

12.png

Happy Learning 🙂

 

 

 

 

VCAP6-DCV Deploy Exam Experience

 

VMW-LGO-CERT-ADV-PRO-6-DATA-CTR-VIRT-DEPLOY-K.jpg

I took the VMware Certified Advanced Professional 6 – Data Center Virtualization Deploy exam on 30th August 2017, and I am really happy to share that I have passed this exam too.

A few tips for everyone who is preparing for the exam:

Before starting this exam, stay relaxed, as you will have to spend lots of time understanding the questions and performing tasks based on that understanding, so use all your experience with vSphere, vSAN, vSphere Replication and VDP. This is an exam for people who work with these products on a daily basis and understand every aspect of vSphere, HA, DRS, vMotion, dvSwitch, some basic PowerCLI, troubleshooting, ESXi command lines, etc. I would suggest following the blueprint and practicing as much as you can; practice will lead you towards success. To get a feel for the exam, I would highly recommend everyone visit Hands-on Labs and test out a few labs, which will give you a brief idea of how to deal with the desktop during the actual exam.

When writing the VCAP6-DCV Deploy exam every second counts, so I would suggest spending time reading the questions thoroughly before starting each task. It is going to be beneficial; at least it was in my case, because I was able to save a lot of time when doing the tasks.

Be aware that CTRL, ALT and BACKSPACE are not working; go back using the arrow keys and then press Delete. It is better to make use of the on-screen keyboard if you wish to copy and paste.

Best of luck to everyone who are preparing for the VCAP Deploy exam!!!

Upgrade to vRealize Operations 6.6

First things first: before we start upgrading, we must download the vRealize Operations Manager virtual appliance updates for both the operating system and the virtual appliance. You can do this from the same page where you download vRealize Operations 6.6. Both of these are PAK files with names similar to vRealize_Operations_Manager-VA-OS-6.6.0.5707160.pak and vRealize_Operations_Manager-VA-6.6.0.5707160.pak.

62.png

Once the download is finished, follow the steps below:

Log in to the vROps admin interface at https://<master-node-FQDN-or-IP-address>/admin and take the cluster offline.

2.png

Cl3.png

Once the cluster is offline, start the upgrade process by clicking on Software Upgrade and then "Install a Software Update".

5

Click Browse and select the PAK file. There are two files that we need to download from the VMware website: one is the OS upgrade and the other is the application upgrade.

61

Let's first upgrade the OS: choose the OS upgrade .pak file and click Upload.

63

Check the version.

7.png

Accept the License , check the Update information

8

and Finally Click on Install

9

Monitor the installation process.

10.png

Since I have many appliances in my lab, it took almost half an hour to upgrade the OS on all of them; the process reboots all the appliances and comes back once done.

Once the OS upgrade is complete, let's start with the application upgrade.

Please follow the same series of steps, shown above but provide the virtual appliance update PAK file this time. Click Upload.

11.png

Check that we have chosen the right file for the upgrade.

12.png

Click install in the Last step and finally monitor the upgrade progress…

13.png

After some time, the virtual appliance will restart. Once restarted, you should log back in to the admin console, and here it is: the all-new HTML5 vRealize Operations Manager.

14.png

and here is the version information..

16.png

that’s it , Happy learning 🙂

 

NSX Controllers ?

In an NSX for vSphere environment, basically the management plane is responsible for providing the GUI interface and the REST API entry point to manage the NSX environment.

Control Plane

The control plane includes a three-node cluster running the control plane protocols required to capture the system configuration and push it down to the data plane, and the data plane consists of VIB modules installed in the hypervisor during host preparation.

NSX Controller stores the following types of tables (see the CLI examples after this list):

  • VTEP table – keeps track of which virtual network (VNI) is present on which VTEP/hypervisor.
  • MAC table – keeps track of VM MAC to VTEP IP mappings.
  • ARP table – keeps track of VM IP to VM MAC mappings.
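These tables can be inspected directly from the NSX Controller CLI, which is handy when troubleshooting; the commands below are standard NSX for vSphere controller commands, with VNI 5000 used purely as an example value.

show control-cluster logical-switches vtep-table 5000
show control-cluster logical-switches mac-table 5000
show control-cluster logical-switches arp-table 5000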

Controllers maintain the routing information by distributing the routing data learned from the control VM to each routing kernel module in the ESXi hosts. The use of the controller cluster eliminates the need for multicast support from the physical network infrastructure. Customers no longer have to provision multicast group IP addresses.  They also no longer need to enable PIM routing or IGMP snooping features on physical switches or routers. Logical switches need to be configured in unicast mode to avail of this feature.

NSX Controllers support an ARP suppression mechanism that reduces the need to flood ARP broadcast requests across the L2 network domain where virtual machines are connected. This is achieved by converting the ARP broadcasts into Controller lookups. If the controller lookup fails, then normal flooding will be used.

The ESXi host, with NSX Virtual Switch, intercepts the following types of traffic:

  • Virtual machine broadcast
  • Virtual machine unicast
  • Virtual machine multicast
  • Ethernet requests
  • Queries to the NSX Controller instance to retrieve the correct response to those requests

Each controller node is assigned a set of roles that define the tasks it can implement. By default, each controller is assigned all the following roles:

  • API Provider:  Handles HTTP requests from NSX Manager
  • Persistence Server: Persistently stores network state information
  • Logical Manager: Computes policies and network topology
  • Switch Manager: Manages the hypervisors and pushes configuration to the hosts
  • Directory Server: Manages VXLAN and distributed logical routing information

One of the controller nodes is elected as a leader for each role. For example, controller 1 might be elected as the leader for the API Provider and Logical Manager roles, controller 2 as the leader for the Persistence Server and Directory Server roles, and controller 3 as the leader for the Switch Manager role.

The leader for each role is responsible for allocating tasks to the individual nodes in the cluster. This is called slicing, and slicing is used to increase the scalability characteristics of the NSX architecture; it ensures that all the controller nodes can be active at any given time.

123

The leader of each role maintains a sharding db table to keep track of the workload. The sharding db table is calculated by the leader and replicated to every controller node. It is used by both VXLAN and distributed logical router, known as DLR. The sharding db table may be recalculated at cluster membership changes, role master changes, or adjusted periodically for rebalancing.

In case of the failure of a controller node, the slices for a given role that were owned by the failed node are reassigned to the remaining members of the cluster.  Node failure triggers a new leader election for the roles originally led by the failed node.

Control Plane Interaction

  • ESXi hosts and NSX logical router virtual machines learn network information and send it to NSX Controller through UWA.
  • The NSX Controller CLI provides a consistent interface to verify VXLAN and logical routing network state information.
  • NSX Manager also provides APIs to programmatically retrieve data from the NSX Controller nodes in future.

7
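A quick way to verify the controller cluster health described above is from any controller node's CLI; the command is a standard NSX for vSphere controller command, and the output below is trimmed and illustrative.

nsx-controller # show control-cluster status
Type                Status
Join status:        Join complete
Majority status:    Connected to cluster majority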

Controller Internal Communication

The management plane communicates with the controller cluster over TCP/443. The management plane also communicates directly with the vsfwd agent in the ESXi host over TCP/5671 using RabbitMQ, to push down firewall configuration changes.

The controllers communicate with the netcpa agent running in the ESXi host over TCP/1234 to propagate L2 and L3 changes. Netcpa then internally propagates these changes to the respective routing and VXLAN kernel modules in the ESXi host. Netcpa also acts as a middleman between the vsfwd agent and the ESXi kernel modules.

NSX Manager chooses a single controller node to start a REST API call. Once the connection is established, the NSX Manager transmits the host certificate thumbprint, VNI and logical interface information to the NSX Controller Cluster.

All the data transmitted by NSX Manager can be found in the file config-by-vsm.xml in the directory /etc/vmware/netcpa on the ESXi host. The file /var/log/netcpa.log can be helpful when troubleshooting the communication path between the NSX Manager, vsfwd and netcpa.

Netcpa randomly chooses a controller to establish the initial connection, which is called the core session. This core session is used to transmit the controller sharding table to the hosts, so they are aware of who is responsible for a particular VNI or routing instance.

5

Hope this helps you in understanding NSX Controllers. Happy learning 🙂

NSX6.3 – Finally NSX supports vSphere 6.5

VMware has released both NSX for vSphere 6.3.0 and vCenter 6.5a / ESXi 6.5a. Before upgrading your NSX environment to 6.3, you must ensure that your vCenter and ESXi hosts are upgraded to 6.5a, as described in KB 2148841.

This version has introduced a lot of great new features and enhancements with just a few standouts listed below which appealed to me :

Controller Disconnected Operation (CDO) mode: A new feature called Controller Disconnected Operation (CDO) mode has been introduced. This mode ensures that data plane connectivity is unaffected when hosts lose connectivity with the controller (I have seen a few customers affected by this).

Controller Disconnected Operation (CDO) mode ensures that the data plane connectivity is unaffected when host lose connectivity with the controller. If you find issues with controller, you can enable CDO mode to avoid temporary connectivity issues with the controller.

You can enable CDO mode for each host cluster through transport zone. CDO mode is disabled by default.

When CDO mode is enabled, NSX Manager creates a special CDO logical switch, one for every transport zone. VXLAN Network Identifier (VNI) of the special CDO logical switch is unique from all other logical switches. When CDO mode is enabled, one controller in the cluster is responsible for collecting all the VTEPs information reported from all transport nodes, and replicate the updated VTEP information to all other transport nodes. When a controller fails, a new controller is elected as the new master to take the responsibility and all transport nodes connected to original master are migrated to the new master and data is synced between the transport nodes and the controllers.

If you add new cluster to transport zone, NSX Manager pushes the CDO mode and VNI to the newly added hosts. If you remove the cluster, NSX Manager removes the VNI data from the hosts.

When you disable the CDO mode on transport zone, NSX Manager removes CDO logical switch.

In Cross-vCenter NSX environment, you can enable CDO mode only on the local transport zones or in a topology where you do not have any local transport zones and have a single universal transport zone for the primary NSX Manager. The CDO mode gets replicated on the universal transport zone for all secondary NSX Managers.

You can enable CDO mode on local transport zone for secondary NSX Manager.

Primary NSX Manager: You can enable CDO mode only on transport zones that do not share the same distributed virtual switch. If the universal transport zone and the local transport zones share the same distributed virtual switch, then CDO mode can be enabled only on the universal transport zone.

Secondary NSX Manager: The CDO mode gets replicated on the universal transport zone for all secondary NSX Managers. You can enable CDO mode on local transport zones, if they do not share the same distributed virtual switch.

Cross-vCenter NSX Active-Standby DFW Enhancements:

  • Multiple Universal DFW sections are now supported. Both Universal and Local rules can consume Universal security groups in source/destination/AppliedTo fields.
  • Universal security groups: Universal Security Group membership can be defined in a static or dynamic manner. Static membership is achieved by manually adding a universal security tag to each VM. Dynamic membership is achieved by adding VMs as members based on dynamic criteria (VM name).
  • Universal Security tags: You can now define Universal Security tags on the primary NSX Manager and mark for universal synchronization with secondary NSX Managers. Universal Security tags can be assigned to VMs statically, based on unique ID selection, or dynamically, in response to criteria such as antivirus or vulnerability scans.
  • Unique ID Selection Criteria: In earlier releases of NSX, security tags are local to a NSX Manager, and are mapped to VMs using the VM’s managed object ID. In an active-standby environment, the managed object ID for a given VM might not be the same in the active and standby datacenters. NSX 6.3.x allows you to configure a Unique ID Selection Criteria on the primary NSX Manager to use to identify VMs when attaching to universal security tags: VM instance UUID, VM BIOS UUID, VM name, or a combination of these options.

    • Unique ID Selection Criteria:
      • Use Virtual Machine instance UUID (recommended) – The VM instance UUID is generally unique within a VC domain; however, there are exceptions, such as when deployments are made through snapshots. If the VM instance UUID is not unique, we recommend you use the VM BIOS UUID in combination with the VM name.
      • Use Virtual Machine BIOS UUID – The BIOS UUID is not guaranteed to be unique within a VC domain, but it is always preserved in case of disaster. We recommend you use the BIOS UUID in combination with the VM name.
      • Use Virtual Machine Name – If all of the VM names in an environment are unique, then the VM name can be used to identify a VM across vCenters. We recommend you use the VM name in combination with the VM BIOS UUID.


Control Plane Agent (netcpa) Auto-recovery: An enhanced auto-recovery mechanism for the netcpa process ensures continuous data-path communication. The monitoring process automatically restarts netcpa if it runs into problems and raises alerts through the syslog server. A summary of the benefits (a quick way to check the agent from the host CLI is shown after this list):

  • automatic netcpa process monitoring.
  • process auto-restart in case of problems, for example, if the system hangs.
  • automatic core file generation for debugging.
  • alert via syslog of the automatic restart event.
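
On an NSX-prepared ESXi host the control plane agent runs as the netcpad daemon, so its state can be checked from the ESXi shell and, if needed, the agent restarted manually. A small sketch:

# Check whether the control plane agent is running
/etc/init.d/netcpad status
# Restart the agent manually if required (the 6.3.x watchdog normally does this for you)
/etc/init.d/netcpad restart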

Log Insight Content Pack: This has been updated for Load Balancer to provide a centralized Dashboard, end-to-end monitoring, and better capacity planning from the user interface (UI).

Drain state for Load Balancer pool members: You can now put a pool member into Drain state, which forces the server to shut down gracefully for maintenance. Setting a pool member to drain state removes the backend server from load balancing, but still allows the server to accept new, persistent connections.

4-byte ASN support for BGP: BGP configuration now supports 4-byte ASNs, with backward compatibility for pre-existing 2-byte ASN BGP peers.
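
If you peer an ESG with a 4-byte ASN, the session can be checked from the Edge CLI once the configuration is published; a short sketch of the commands involved (the exact output format may vary by release):

show ip bgp neighbors   (confirms the remote AS and the session state)
show ip bgp             (lists the prefixes learned over BGP)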


Improved Layer 2 VPN performance: Performance for Layer 2 VPN has been improved. This allows a single Edge appliance to support up to 1.5 Gb/s throughput, which is an improvement from the previous 750 Mb/s.

Improved Configurability for OSPF: When configuring OSPF on an Edge Services Gateway (ESG), an NSSA can now translate all Type-7 LSAs to Type-5 LSAs.

DFW timers: NSX 6.3.0 introduces Session Timers that define how long a session is maintained on the firewall after inactivity. When the session timeout for the protocol expires, the session closes. On the firewall, you can define timeouts for TCP, UDP, and ICMP sessions and apply them to a user defined set of VMs or vNICs.

Linux support for Guest Introspection: NSX 6.3.0 enables Guest Introspection for Linux VMs. On Linux-based guest VMs, the NSX Guest Introspection feature leverages the fanotify and inotify capabilities provided by the Linux kernel.

Publish Status for Service Composer: Service Composer publish status is now available to check whether a policy is synchronized. This provides increased visibility of security policy translations into DFW rules on the host.

NSX kernel modules now independent of ESXi version: Starting in NSX 6.3.0, NSX kernel modules use only the publicly available VMKAPI so that the interfaces are guaranteed across releases. This enhancement helps reduce the chance of host upgrades failing due to incorrect kernel module versions. In earlier releases, every ESXi upgrade in an NSX environment required at least two reboots to make sure the NSX functionality continued to work (due to having to push new kernel modules for every new ESXi version).

Rebootless upgrade and uninstall on hosts: On vSphere 6.0 and later, once you have upgraded to NSX 6.3.0, any subsequent NSX VIB changes will not require a reboot. Instead, hosts must enter maintenance mode to complete the VIB change. This includes the NSX 6.3.x VIB install that is required after an ESXi upgrade, any NSX 6.3.x VIB uninstall, and future NSX 6.3.0 to NSX 6.3.x upgrades. Upgrading from NSX versions earlier than 6.3.0 to NSX 6.3.x still requires that you reboot hosts to complete the upgrade, and upgrading NSX 6.3.x on vSphere 5.5 still requires a host reboot as well.
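
To see from the host itself which NSX VIBs are installed, and to place the host into maintenance mode from the CLI rather than the vSphere Client, something like the following can be used; a sketch from the ESXi shell (evacuate or power off the VMs first):

# List the installed NSX VIBs (names such as esx-vxlan and esx-vsip are typical for 6.3.x)
esxcli software vib list | grep esx-v
# Put the host into maintenance mode before the VIB change
esxcli system maintenanceMode set --enable true
# Exit maintenance mode afterwards
esxcli system maintenanceMode set --enable false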

Happy learning 🙂


VMware NSX for vSphere 6.2.5 Released

VMware has released NSX for vSphere 6.2.5. This is a maintenance release that contains bug fixes.

The 6.2.5 release is a bugfix release that addresses a loss of network connectivity after performing certain vMotion operations in installations where some hosts ran a newer version of NSX, and other hosts ran an older version of NSX.

Details of the fixes can be checked Here

But remember, vSphere 6.5 is still not supported!

QoS Tagging and Traffic Filtering on vDS

There are two types of QoS marking/tagging common in networking: 802.1p (CoS), applied to Ethernet (Layer 2) frames, and Differentiated Services Code Point (DSCP) marking, applied to IP packets. The physical network devices use these tags to identify important traffic types and provide Quality of Service based on the value of the tag. As business-critical and latency-sensitive applications are virtualized and run in parallel with other applications on ESXi hosts, it is important to enable traffic management and tagging features on the vDS.

The traffic management feature on the vDS helps reserve bandwidth for important traffic types, and the tagging feature allows the external physical network to understand the relative importance of each traffic type. It is a best practice to tag the traffic near the source to help achieve end-to-end Quality of Service (QoS). During network congestion, tagged traffic is less likely to be dropped, which translates to a higher QoS for that traffic.

Once the packets are classified based on the qualifiers described in the traffic filtering section, you can choose to perform Ethernet (Layer 2) or IP (Layer 3) header-level marking. The markings can be configured at the port group level.

Let's configure this so that I can ensure my business-critical VMs get higher priority at the physical layer.

Log in to the Web Client, click on the dvSwitch, and choose the port group on which you want to apply the tag:

  1. Click on Manage tab
  2. Select the Settings option
  3. Select Policies
  4. Click on Edit

Qos1.png

  1. Click on Traffic filtering and marking
  2. In the Status drop down box choose Enabled
  3. Click the Green + to add a New Network Traffic Rule

qos2

  1. In the Action: drop down box select Tag (default)
  2. Check the box to the right of DSCP value
  3. In the drop down box for the DSCP value select Maximum 63
  4. In the Traffic direction drop down box select Ingress
  5. Click the Green +

qos3

New Network Traffic Rule – Qualifier

Now that you have decided to tag the traffic, the next question is which traffic you would like to tag. There are three options available when defining the qualifier:

  • System Traffic Qualifier
  • New MAC qualifier
  • New IP Qualifier

This means you have options to select packets based on system traffic types, MAC header fields, or IP header fields. Here we will create a qualifier based on system traffic.

Select New System Traffic Qualifier from the drop-down menu:

qos4.gif

  1. Select Virtual Machine
  2. Click OK

qos5

Check that your rule matches the following (Name: Network Traffic Rule 1):

  1. Action: Tag
  2. DSCP Value: Checked
  3. DSCP Value: 63
  4. Traffic Direction: Ingress
  5. System traffic: Virtual Machine
  6. Click OK

qos6
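
If you want to confirm that the DSCP value really ends up in the packets leaving the host, one option is to capture traffic on the uplink with pktcap-uw and inspect the IP header in Wireshark. A minimal sketch, assuming vmnic0 is the uplink carrying this VM's traffic:

# Capture traffic on the uplink in the transmit direction to a pcap file (stop with Ctrl+C)
pktcap-uw --uplink vmnic0 --dir 1 -o /tmp/dscp-check.pcap

Open the capture and check the DSCP field in the IP header of the VM's packets; it should show the value 63 configured above.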

In the same way, you can also allow or block traffic:

Again go to the dvPort group – Settings – Policies – Edit – Traffic filtering and marking, edit the existing rule that we created above, and change the Action to Drop.

qos7

  1. Click the Green + to add a new qualifier.
  2. Select New IP Qualifier… from the drop down list.

qos8

  1. Select ICMP from the Protocol drop down menu.
  2. For the Source address, select "IP address is" and enter the IP address of your VM; in my case it is 192.168.100.90
  3. Click OK

qos9

  1. Finally, click OK. This will drop ICMP traffic from that particular VM (a quick test is shown below).
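
To verify, simply ping something from the VM whose address you used in the qualifier; a trivial check on a Linux guest (the gateway address 192.168.100.1 is just an assumed example):

# Run from inside the VM 192.168.100.90 – the requests should now time out
ping -c 4 192.168.100.1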

In the same way, you can write many rules with various permutations and combinations to help your organisation achieve QoS and traffic filtering on the vDS. I hope this helps you in your environments. Happy Learning 🙂

Learn NSX – Part-13 (Configure OSPF for DLR)

Configuring OSPF on a logical router enables VM connectivity across logical routers and from logical routers to edge services gateways (ESGs).

OSPF routing policies provide a dynamic process of traffic load balancing between routes of equal cost. An OSPF network is divided into routing areas to optimize traffic flow and limit the size of routing tables. An area is a logical collection of OSPF networks, routers, and links that share the same area identification, and areas are identified by an Area ID.

Before we proceed with the OSPF configuration, a Router ID must be configured on our deployed DLR. To configure the Router ID:

1 – Log in to the vSphere Web Client and click Networking & Security.

2 – Select NSX Edges under the Networking & Security section.

3 – Double-click the distributed logical router on which to configure OSPF.

1.gif

4 – On the Manage tab:

  1. Select the Routing tab.
  2. Select Global Configuration section from the options on the left.
  3. Under the Dynamic Routing Configuration section, click Edit.

2.gif

5 – In the Edit Dynamic Routing Configuration dialog box:

  1. Select an interface from the Router ID drop-down menu to use as the OSPF Router ID.
  2. Select the Enable Logging check box.
  3. Select Info from the Log Level drop-down menu.
  4. Click OK.

4

6 – Click Publish Changes.

5

Now that you have configured the Router ID, let's configure OSPF:

1 – Select the OSPF section from the options on the left:

  1. Under the Area Definitions section, click the green + icon to add an OSPF area.

6

2 – In the New Area Definition dialog box:

  1. Enter the OSPF area ID in the Area ID text box.
  2. Select the required OSPF area type from the Type drop-down menu:
    1. Normal
    2. NSSA, which prevents the flooding of AS-external link-state advertisements (LSAs) into the NSSA.
  3. Select the required authentication type from the Authentication drop-down menu and enter a password (optional).
  4. Click OK.

7

3 – Under the Area to Interface Mapping section, click the green + icon.

8

4 – In the New Area to Interface Mapping dialog box:

  1. Select an appropriate uplink interface in the Interface drop-down menu.
  2. Select the OSPF area from the Area drop-down menu.
  3. Enter 1 in the Hello Interval text box.
  4. Enter 3 in the Dead Interval text box.
  5. Enter 128 in the Priority text box.
  6. Enter 1 in the Cost text box.
  7. Click OK.

9

5 – Under the OSPF Configuration section, click Edit. In the OSPF Configuration dialog box:

  1. Select the Enable OSPF check box.
  2. Enter the OSPF protocol address in the Protocol Address text box.
  3. Enter the OSPF forwarding address in the Forwarding Address text box.
  4. Select the Enable Graceful Restart check box for packet forwarding to be uninterrupted during a restart of OSPF services.
  5. Select the Enable Default Originate check box to allow the NSX Edge to advertise itself as a default gateway to its peers (optional).
  6. Click OK.

On a DLR, the protocol address is the IP the logical router control VM uses to form the OSPF adjacency, while the forwarding address is the data-path address used by the ESXi hosts; both must be in the same subnet as the uplink.

1010-1

6 – Click Publish Changes.

11

7 – Select the Route Redistribution section from the options on the left:

  1. Click Edit.
  2. Select OSPF.
  3. Click OK.

12.gif

12-1

8 – Under the Route Redistribution table section, click the green +. In the New Redistribution Criteria dialog box:

  1. Select the IP prefixes from the Prefix Name drop-down menu.
  2. Select OSPF from the Learner Protocol drop-down menu.
  3. Select the appropriate sources under Allow Learning From.
  4. Select Permit from the Action drop-down menu.
  5. Click OK.

13

9 – Click Publish Changes.

14

Our lab topology will be as below. Add an interface to this DLR for the logical switches created in my other posts, as per the topology. VMs connected to the two logical switches can now reach each other through the DLR, and because the logical router's connected routes (172.16.10.0/24 and 172.16.20.0/24) are advertised into OSPF, they are also reachable from the networks behind the ESG. You can verify the adjacency and the learned routes with the CLI commands shown after the diagram.

15
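
To confirm that the adjacency with the ESG has come up and that routes are being exchanged, log in to the DLR control VM (or the ESG) console and use the routing CLI; a short sketch of the commands involved:

show ip ospf neighbor   (the peer should be listed with a Full adjacency state)
show ip route           (routes learned via OSPF are flagged with O in the routing table)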


VMFS 6

vSphere 6.5 has been released with a new file system, the first since VMFS5 was released in 2011. The good news is that VMFS6 supports 512 devices and 2048 paths; in previous versions the limit was 256 devices and 1024 paths, and some customers were hitting this limit in their clusters. Especially when RDMs are used, when there is a limited number of VMs per datastore, or when 8 paths to each device are used, it becomes easy to hit those limits. Hopefully with 6.5 that will not happen anytime soon.

Automatic Space Reclamation is the next feature that many customers and administrators have been waiting for. It is based on VAAI UNMAP, which has been around for a while and allows previously used blocks to be unmapped; the reclaimed capacity is released back to the array so that other volumes can use those blocks when needed. Previously we had to run the unmap command manually to reclaim the blocks; with VMFS6 this is automatic, integrated into the UI, and can be turned on or off from the GUI (a couple of example CLI commands are shown below).
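
For reference, this is roughly how reclamation can be driven or inspected from the ESXi CLI; a sketch, assuming an ESXi 6.5 host and a datastore named Datastore01 (adjust the label to your environment):

# Manually reclaim free space on a VMFS5 or VMFS6 datastore
esxcli storage vmfs unmap -l Datastore01
# Check the automatic space reclamation settings of a VMFS6 datastore
esxcli storage vmfs reclaim config get -l Datastore01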

SEsparse will be the snapshot format supported on VMFS6; vSphere will not support the VMFSsparse snapshot format on VMFS6, although it continues to be supported on VMFS5. VMFS6 and VMFS5 can co-exist, but there is no straightforward upgrade from VMFS5 to VMFS6. After you upgrade your ESXi hosts to version 6.5, you can continue using any existing VMFS5 datastores. To take advantage of the VMFS6 features, create a VMFS6 datastore and migrate virtual machines from the VMFS5 datastore to the VMFS6 datastore; you cannot upgrade the VMFS5 datastore itself to VMFS6. Today SEsparse is used primarily for View and for virtual disks larger than 2 TB; with VMFS6 it becomes the default snapshot format. A quick way to check which VMFS version a datastore is running is shown below.
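
Before planning the Storage vMotion, it helps to confirm which VMFS version each datastore is actually on; a small sketch from the ESXi shell (the volume name is a placeholder):

# Show the file system type and version of a mounted datastore
vmkfstools -Ph /vmfs/volumes/Datastore01
# Or list all mounted file systems with their type (VMFS-5 vs VMFS-6)
esxcli storage filesystem list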

Comparison is as below:

Features and Functionalities | VMFS5 | VMFS6
Access for ESXi 6.5 hosts | Yes | Yes
Access for ESXi hosts version 6.0 and earlier | Yes | No
Datastores per host | 1024 | 1024
512n storage devices | Yes (default) | Yes
512e storage devices | Yes (not supported on local 512e devices) | Yes (default)
Automatic space reclamation | No | Yes
Manual space reclamation through the esxcli command | Yes | Yes
Space reclamation from guest OS | Limited | Yes
GPT storage device partitioning | Yes | Yes
MBR storage device partitioning | Yes (for a VMFS5 datastore previously upgraded from VMFS3) | No
Storage devices greater than 2 TB for each VMFS extent | Yes | Yes
Support for virtual machines with large-capacity virtual disks (greater than 2 TB) | Yes | Yes
Support of small files of 1 KB | Yes | Yes
Default use of ATS-only locking mechanisms on storage devices that support ATS | Yes | Yes
Block size | Standard 1 MB | Standard 1 MB
Default snapshots | VMFSsparse for virtual disks smaller than 2 TB; SEsparse for virtual disks larger than 2 TB | SEsparse
Virtual disk emulation type | 512n | 512n
vMotion | Yes | Yes
Storage vMotion across different datastore types | Yes | Yes
High Availability and Fault Tolerance | Yes | Yes
DRS and Storage DRS | Yes | Yes
RDM | Yes | Yes

Happy Learning 🙂