Upgrade to vRealize Operations 6.6

Before we start upgrading, we must download the vRealize Operations Manager virtual appliance updates for both the operating system and the virtual appliance. You can do this from the same page where you download vRealize Operations 6.6. Both are PAK files with names beginning with vRealize_Operations_Manager-VA-OS- and vRealize_Operations_Manager-VA-


Once the download is finished, follow the steps below:

Log in to the vROps admin interface at https://&lt;master-node-FQDN-or-IP-address&gt;/admin and take the cluster offline.



Once the cluster is offline, start the upgrade process by clicking Software Update and then “Install a Software Update”.


Click Browse and select the PAK file. There are two files we need to download from the VMware website: one is the OS upgrade and the other is the application upgrade.


Let’s upgrade the OS first: choose the OS upgrade .pak file and click Upload.

Check the version.


Accept the license and check the update information


and finally click Install.


Monitor the installation process.

Since my lab has many appliances, it took almost half an hour to upgrade the OS on all of them; the appliances all reboot once the upgrade completes.

Once the OS upgrade is done, let’s start the application upgrade.

Follow the same series of steps shown above, but provide the virtual appliance update PAK file this time. Click Upload.


Check that we have chosen the right file for the upgrade.


Click Install in the last step and monitor the upgrade progress.


After some time, the virtual appliance will restart. Once restarted, log back in to the admin console and you will see the all-new HTML5 vRealize Operations Manager interface.


And here is the version information.


That’s it. Happy learning 🙂


NSX Controllers?

In an NSX for vSphere environment, the management plane is responsible for providing the GUI and the REST API entry point used to manage the NSX environment.

Control Plane

The control plane includes a three-node controller cluster running the control plane protocols required to capture the system configuration and push it down to the data plane. The data plane consists of VIB modules installed in the hypervisor during host preparation.

NSX Controller stores the following types of tables:

  • VTEP table – keeps track of which virtual network (VNI) is present on which VTEP/hypervisor.
  • MAC table – keeps track of VM MAC to VTEP IP mappings.
  • ARP table – keeps track of VM IP to VM MAC mappings.

Controllers maintain the routing information by distributing the routing data learned from the control VM to the routing kernel module in each ESXi host. The use of the controller cluster eliminates the need for multicast support from the physical network infrastructure: customers no longer have to provision multicast group IP addresses, and they no longer need to enable PIM routing or IGMP snooping on physical switches or routers. Logical switches must be configured in unicast mode to take advantage of this feature.

NSX Controllers support an ARP suppression mechanism that reduces the need to flood ARP broadcast requests across the L2 network domain where virtual machines are connected. This is achieved by converting the ARP broadcasts into Controller lookups. If the controller lookup fails, then normal flooding will be used.
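The suppression flow described above amounts to a lookup with a flooding fallback. The sketch below is an illustrative model only; the table layout and function names are assumptions for this post, not NSX internals:

```python
# Illustrative model of ARP suppression: table layout and names are
# assumptions for this sketch, not NSX internals.

def handle_arp_request(target_ip, controller_arp_table):
    """Resolve an ARP request via a controller lookup instead of flooding.

    Returns (mac, flooded): the resolved MAC (or None) and whether the
    request had to be flooded on the logical switch.
    """
    mac = controller_arp_table.get(target_ip)
    if mac is not None:
        # The controller knows the IP-to-MAC mapping, so the host can
        # answer locally and no broadcast leaves the hypervisor.
        return mac, False
    # Lookup failed: fall back to normal flooding so the real owner replies.
    return None, True

arp_table = {"10.0.0.10": "00:50:56:aa:bb:cc"}
print(handle_arp_request("10.0.0.10", arp_table))  # suppressed, no flood
print(handle_arp_request("10.0.0.99", arp_table))  # miss, flood instead
```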

The ESXi host, with the NSX Virtual Switch, intercepts virtual machine broadcast, unicast, and multicast Ethernet requests, and queries the NSX Controller instance to retrieve the correct response to those requests.

Each controller node is assigned a set of roles that define the tasks it can implement. By default, each controller is assigned all the following roles:

  • API Provider:  Handles HTTP requests from NSX Manager
  • Persistence Server: Persistently stores network state information
  • Logical Manager: Computes policies and network topology
  • Switch Manager: Manages the hypervisors and pushes configuration to the hosts
  • Directory Server: Manages VXLAN and distributed logical routing information

One of the controller nodes is elected as leader for each role. For example, controller 1 might be elected leader for the API Provider and Logical Manager roles, controller 2 leader for the Persistence Server and Directory Server roles, and controller 3 leader for the Switch Manager role.

The leader for each role is responsible for allocating tasks to the individual nodes in the cluster. This is called slicing, and it is used to increase the scalability of the NSX architecture: slicing ensures that all the controller nodes can be active at any given time.


The leader of each role maintains a sharding db table to keep track of the workload. The sharding db table is calculated by the leader and replicated to every controller node. It is used by both VXLAN and the distributed logical router (DLR). The sharding db table may be recalculated when cluster membership changes or a role master changes, or periodically for rebalancing.

In case of the failure of a controller node, the slices for a given role that were owned by the failed node are reassigned to the remaining members of the cluster.  Node failure triggers a new leader election for the roles originally led by the failed node.
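The slicing and failover behaviour can be sketched with a toy model. Round-robin assignment here is only a stand-in for NSX's real (internal) sharding algorithm; the point is that every node is active at once and a failed node's slices move to the survivors:

```python
# Toy model of role slicing: the role leader spreads slices (here, VNIs)
# across all active controller nodes, and recomputes the table when a node
# fails. Round-robin stands in for NSX's real (undocumented) algorithm.

def assign_slices(vnis, nodes):
    """Assign each VNI (slice) to a controller node, round-robin."""
    nodes = sorted(nodes)
    return {vni: nodes[i % len(nodes)] for i, vni in enumerate(sorted(vnis))}

vnis = [5000, 5001, 5002, 5003, 5004, 5005]
table = assign_slices(vnis, ["ctrl-1", "ctrl-2", "ctrl-3"])
# Every node owns some slices, so all controllers are active at once.

# ctrl-2 fails: recompute over the survivors; the slices ctrl-2 owned are
# reassigned to the remaining members of the cluster.
table_after_failure = assign_slices(vnis, ["ctrl-1", "ctrl-3"])
assert all(owner != "ctrl-2" for owner in table_after_failure.values())
```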

Control Plane Interaction

  • ESXi hosts and NSX logical router virtual machines learn network information and send it to the NSX Controller through the User World Agent (UWA).
  • The NSX Controller CLI provides a consistent interface to verify VXLAN and logical routing network state information.
  • NSX Manager also provides APIs to programmatically retrieve data from the NSX Controller nodes in the future.


Controller Internal Communication

The management plane communicates with the controller cluster over TCP/443. The management plane also communicates directly with the vsfwd agent on the ESXi host over TCP/5671 using RabbitMQ, to push down firewall configuration changes.

The controllers communicate with the netcpa agent running on the ESXi host over TCP/1234 to propagate L2 and L3 changes. Netcpa then propagates these changes internally to the respective routing and VXLAN kernel modules on the ESXi host. Netcpa also acts as a middleman between the vsfwd agent and the ESXi kernel modules.
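The channels above can be summarised, and checked from a management host, with a generic TCP probe. The port numbers come from the text; the probe is ordinary socket code, not an NSX tool:

```python
# Quick-reference map of the control/management-plane channels, plus a
# generic TCP probe for checking reachability from a management host.
import socket

NSX_CHANNELS = {
    "manager-to-controller": 443,   # management plane -> controller cluster
    "manager-to-vsfwd": 5671,       # RabbitMQ, firewall config push to hosts
    "controller-to-netcpa": 1234,   # L2/L3 updates to the host agent
}

def tcp_reachable(host, port, timeout=2.0):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example against a hypothetical controller address:
# tcp_reachable("192.168.110.201", NSX_CHANNELS["controller-to-netcpa"])
```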

NSX Manager chooses a single controller node to initiate a REST API call. Once the connection is established, NSX Manager transmits the host certificate thumbprint, VNI, and logical interface information to the NSX Controller cluster.

All the data transmitted by NSX Manager can be found in the file config-by-vsm.xml in the directory /etc/vmware/netcpa on the ESXi host. The file /var/log/netcpa.log can be helpful in troubleshooting the communication path between NSX Manager, vsfwd, and netcpa.

Netcpa randomly chooses a controller to establish the initial connection, which is called the core session. The core session is used to transmit the controller sharding table to the hosts, so they know which controller is responsible for a particular VNI or routing instance.


Hope this helps you in understanding NSX Controllers. Happy learning 🙂

NSX 6.3 – Finally NSX supports vSphere 6.5

VMware has released NSX for vSphere 6.3.0 along with vCenter 6.5a and ESXi 6.5a. Before upgrading your NSX environment to 6.3, you must upgrade your vCenter and ESXi hosts to 6.5a, as described in KB 2148841.

This version introduces a lot of great new features and enhancements; just a few standouts that appealed to me are listed below:

Controller Disconnected Operation (CDO) mode: A new feature called Controller Disconnected Operation (CDO) mode has been introduced. It ensures that data plane connectivity is unaffected when hosts lose connectivity with the controller (I have seen a few customers affected by exactly this). If you run into controller issues, you can enable CDO mode to avoid temporary connectivity problems.

You can enable CDO mode for each host cluster through transport zone. CDO mode is disabled by default.

When CDO mode is enabled, NSX Manager creates a special CDO logical switch, one for every transport zone. The VXLAN Network Identifier (VNI) of the special CDO logical switch is distinct from those of all other logical switches. While CDO mode is enabled, one controller in the cluster is responsible for collecting the VTEP information reported by all transport nodes and replicating the updated VTEP information to all other transport nodes. When that controller fails, a new controller is elected as master to take over this responsibility; all transport nodes connected to the original master are migrated to the new master, and data is synced between the transport nodes and the controllers.

If you add a new cluster to the transport zone, NSX Manager pushes the CDO mode and VNI to the newly added hosts. If you remove the cluster, NSX Manager removes the VNI data from the hosts.

When you disable CDO mode on a transport zone, NSX Manager removes the CDO logical switch.

In Cross-vCenter NSX environment, you can enable CDO mode only on the local transport zones or in a topology where you do not have any local transport zones and have a single universal transport zone for the primary NSX Manager. The CDO mode gets replicated on the universal transport zone for all secondary NSX Managers.

You can enable CDO mode on local transport zone for secondary NSX Manager.

Primary NSX Manager: You can enable CDO mode only on transport zones that do not share the same distributed virtual switch. If the universal transport zone and the local transport zones share the same distributed virtual switch, then CDO mode can be enabled only on the universal transport zone.

Secondary NSX Manager: The CDO mode gets replicated on the universal transport zone for all secondary NSX Managers. You can enable CDO mode on local transport zones, if they do not share the same distributed virtual switch.

Cross-vCenter NSX Active-Standby DFW Enhancements:

  • Multiple Universal DFW sections are now supported. Both Universal and Local rules can consume Universal security groups in source/destination/AppliedTo fields.
  • Universal security groups: Universal Security Group membership can be defined in a static or dynamic manner. Static membership is achieved by manually adding a universal security tag to each VM. Dynamic membership is achieved by adding VMs as members based on dynamic criteria (VM name).
  • Universal Security tags: You can now define Universal Security tags on the primary NSX Manager and mark for universal synchronization with secondary NSX Managers. Universal Security tags can be assigned to VMs statically, based on unique ID selection, or dynamically, in response to criteria such as antivirus or vulnerability scans.
  • Unique ID Selection Criteria: In earlier releases of NSX, security tags are local to an NSX Manager and are mapped to VMs using the VM’s managed object ID. In an active-standby environment, the managed object ID for a given VM might not be the same in the active and standby datacenters. NSX 6.3.x allows you to configure Unique ID Selection Criteria on the primary NSX Manager, which is used to identify VMs when attaching universal security tags: VM instance UUID, VM BIOS UUID, VM name, or a combination of these options.

    • Unique ID Selection Criteria options:
      • Use Virtual Machine instance UUID (recommended) – The VM instance UUID is generally unique within a VC domain; exceptions exist, such as deployments made through snapshots. If the VM instance UUID is not unique, use the VM BIOS UUID in combination with the VM name.
      • Use Virtual Machine BIOS UUID – The BIOS UUID is not guaranteed to be unique within a VC domain, but it is always preserved in case of disaster. Use the BIOS UUID in combination with the VM name.
      • Use Virtual Machine Name – If all of the VM names in an environment are unique, the VM name can be used to identify a VM across vCenters. Use the VM name in combination with the VM BIOS UUID.



Control Plane Agent (netcpa) Auto-recovery: An enhanced auto-recovery mechanism for the netcpa process ensures continuous data path communication. The automatic netcpa monitoring process also auto-restarts in case of any problems and provides alerts through the syslog server. A summary of benefits:

  • automatic netcpa process monitoring.
  • process auto-restart in case of problems, for example, if the system hangs.
  • automatic core file generation for debugging.
  • alert via syslog of the automatic restart event.

Log Insight Content Pack: This has been updated for Load Balancer to provide a centralized Dashboard, end-to-end monitoring, and better capacity planning from the user interface (UI).

Drain state for Load Balancer pool members: You can now put a pool member into Drain state, which forces the server to shut down gracefully for maintenance. Setting a pool member to drain state removes the backend server from load balancing, but still allows the server to accept new, persistent connections.

4-byte ASN support for BGP: BGP configuration with 4-byte ASN support is made available along with backward compatibility for the pre-existing 2-byte ASN BGP peers.
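As background on what 4-byte ASN support means, RFC 5396 defines two textual forms for these ASNs: "asplain" (a single decimal number) and "asdot" (high.low for values above 65535). A small conversion sketch, purely illustrative and not an NSX utility:

```python
# RFC 5396 notation for 4-byte BGP ASNs: "asplain" is one decimal number,
# "asdot" writes ASNs above 65535 as high.low.

def asplain_to_asdot(asn):
    """Render an ASN in asdot notation; 2-byte ASNs stay plain."""
    if asn < 65536:
        return str(asn)
    return f"{asn >> 16}.{asn & 0xFFFF}"

def asdot_to_asplain(text):
    """Parse asdot or asplain text back to a plain integer ASN."""
    if "." not in text:
        return int(text)
    high, low = text.split(".")
    return (int(high) << 16) + int(low)

print(asplain_to_asdot(64512))   # 64512 -- fits in 2 bytes, unchanged
print(asplain_to_asdot(65536))   # 1.0   -- first ASN needing 4 bytes
print(asdot_to_asplain("1.10"))  # 65546
```

A 2-byte-only peer can understand ASNs up to 65535, which is why backward compatibility with pre-existing 2-byte ASN BGP peers matters.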


Improved Layer 2 VPN performance: Performance for Layer 2 VPN has been improved. This allows a single Edge appliance to support up to 1.5 Gb/s throughput, which is an improvement from the previous 750 Mb/s.

Improved Configurability for OSPF: While configuring OSPF on an Edge Services Gateway (ESG), NSSA can translate all Type-7 LSAs to Type-5 LSAs.

DFW timers: NSX 6.3.0 introduces Session Timers that define how long a session is maintained on the firewall after inactivity. When the session timeout for the protocol expires, the session closes. On the firewall, you can define timeouts for TCP, UDP, and ICMP sessions and apply them to a user defined set of VMs or vNICs.

Linux support for Guest Introspection: NSX 6.3.0 enables Guest Introspection for Linux VMs. On Linux-based guest VMs, the NSX Guest Introspection feature leverages the fanotify and inotify capabilities provided by the Linux kernel.

Publish Status for Service Composer: Service Composer publish status is now available to check whether a policy is synchronized. This provides increased visibility of security policy translations into DFW rules on the host.

NSX kernel modules now independent of ESXi version: Starting in NSX 6.3.0, NSX kernel modules use only the publicly available VMKAPI so that the interfaces are guaranteed across releases. This enhancement helps reduce the chance of host upgrades failing due to incorrect kernel module versions. In earlier releases, every ESXi upgrade in an NSX environment required at least two reboots to make sure the NSX functionality continued to work (due to having to push new kernel modules for every new ESXi version).

Rebootless upgrade and uninstall on hosts: On vSphere 6.0 and later, once you have upgraded to NSX 6.3.0, any subsequent NSX VIB changes will not require a reboot. Instead hosts must enter maintenance mode to complete the VIB change. This includes the NSX 6.3.x VIB install that is required after an ESXi upgrade, any NSX 6.3.x VIB uninstall, and future NSX 6.3.0 to NSX 6.3.x upgrades. Upgrading from NSX versions earlier than 6.3.0 to NSX 6.3.x still requires that you reboot hosts to complete the upgrade. Upgrading NSX 6.3.x on vSphere 5.5 still requires that you reboot hosts to complete the upgrade.

Happy learning 🙂




VMware NSX for vSphere 6.2.5 Released

VMware has released NSX for vSphere 6.2.5, a maintenance release containing bug fixes.

The 6.2.5 release is a bugfix release that addresses a loss of network connectivity after performing certain vMotion operations in installations where some hosts ran a newer version of NSX, and other hosts ran an older version of NSX.

Details of the fixes can be checked here

But remember: vSphere 6.5 is still not supported!

QoS Tagging and Traffic Filtering on vDS

Two types of QoS marking/tagging are common in networking: 802.1p (CoS), applied to Ethernet (Layer 2) frames, and Differentiated Services Code Point (DSCP) marking, applied to IP packets. Physical network devices use these tags to identify important traffic types and provide Quality of Service based on the value of the tag. As business-critical and latency-sensitive applications are virtualized and run in parallel with other applications on ESXi hosts, it is important to enable traffic management and tagging features on the vDS.

The traffic management feature on the vDS helps reserve bandwidth for important traffic types, and the tagging feature allows the external physical network to understand the level of importance of each traffic type. It is a best practice to tag the traffic near the source to help achieve end-to-end Quality of Service (QoS). During network congestion scenarios, the tagged traffic doesn’t get dropped which translates to a higher Quality of Service (QoS) for the tagged traffic.

Once the packets are classified based on the qualifiers described in the traffic filtering section, users can choose to perform Ethernet (layer2) or IP (layer 3) header level marking. The markings can be configured at the port group level.

Let’s configure this, so that my business-critical VMs get higher priority at the physical layer.

Log in to the Web Client, click the dvSwitch, and choose the port group on which you want to apply the tag:

  1. Click on Manage tab
  2. Select the Settings option
  3. Select Policies
  4. Click on Edit


  1. Click on Traffic filtering and marking
  2. In the Status drop down box choose Enabled
  3. Click the Green + to add a New Network Traffic Rule


  1. In the Action: drop down box select Tag (default)
  2. Check the box to the right of DSCP value
  3. In the drop down box for the DSCP value select Maximum 63
  4. In the Traffic direction drop down box select Ingress
  5. Click the Green +
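A note on the value 63 chosen above: DSCP is a 6-bit field, so 63 is its maximum, and in the IPv4 header it occupies the upper six bits of the former ToS byte (the low two bits are ECN). A small packing sketch, for illustration only:

```python
# DSCP is a 6-bit field (0-63), packed into the upper six bits of the IPv4
# ToS/DS byte; the low two bits are ECN. Illustrative helper only.

def dscp_to_tos(dscp, ecn=0):
    """Pack a DSCP value (0-63) and ECN bits (0-3) into the ToS/DS byte."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit field: 0-63")
    if not 0 <= ecn <= 3:
        raise ValueError("ECN is a 2-bit field: 0-3")
    return (dscp << 2) | ecn

print(dscp_to_tos(63))  # 252 -- the maximum marking selected in this rule
print(dscp_to_tos(46))  # 184 -- EF (Expedited Forwarding), for comparison
```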


New Network Traffic Rule – Qualifier

Now that you have decided to tag the traffic, the next question is which traffic you would like to tag. There are three options available when defining the qualifier:

  • System Traffic Qualifier
  • New MAC qualifier
  • New IP Qualifier

This means you have options to select packets based on system traffic type, MAC header fields, or IP header fields. Here we will create a qualifier based on system traffic.

Select New System Traffic Qualifier from the drop-down menu.

  1. Select Virtual Machine
  2. Click OK


Check that your rule matches:

  1. Name: Network Traffic Rule 1
  2. Action: Tag
  3. DSCP Value: Checked
  4. DSCP Value: 63
  5. Traffic Direction: Ingress
  6. System traffic: Virtual Machine
  7. Click OK


Same way you can also allow/block the traffic:

Again go to dvPort – Settings – Policies – Edit – Traffic Filtering and Marking, edit the existing rule that we created above, and change the Action to Drop.


  1. Click the Green + to add a new qualifier.
  2. Select New IP Qualifier… from the drop down list.


  1. Select ICMP from the Protocol drop down menu.
  2. Select Source address and enter the IP address of your VM.
  3. Click OK


  1. Finally, click OK. This will drop pings for that particular VM.

In the same way, you can write many rules with various permutations and combinations to help your organisation achieve QoS and traffic filtering on the vDS. I hope this helps in your environments. Happy learning 🙂

Learn NSX – Part-13 (Configure OSPF for DLR)

Configuring OSPF on a logical router enables VM connectivity across logical routers and from logical routers to edge services gateways (ESGs).

OSPF routing policies provide a dynamic process of traffic load balancing between routes of equal cost. An OSPF network is divided into routing areas to optimize traffic flow and limit the size of routing tables. An area is a logical collection of OSPF networks, routers, and links that share the same area identification, and areas are identified by an Area ID.

Before we proceed with the OSPF configuration, a Router ID must be configured on our deployed DLR. To configure the Router ID:

1- Log in to the vSphere Web Client and click Networking & Security.

2- Select NSX Edges under the Networking & Security section.

3 – Double-click the distributed logical router on which to configure OSPF.


4 – On the Manage tab:

  1. Select the Routing tab.
  2. Select Global Configuration section from the options on the left.
  3. Under the Dynamic Routing Configuration section, click Edit.


5 – In the Edit Dynamic Routing Configuration dialog box:

  1. Select an interface from the Router ID drop-down menu to use as the OSPF Router ID.
  2. Select the Enable Logging check box.
  3. Select Info from the Log Level drop-down menu.
  4. Click OK.


6 – Click Publish Changes.


Now you have configured Router ID , now let’s configure OSPF:

1 – Select the OSPF section from the options on the left:

  1. Under the Area Definitions section, click the green + icon to add an OSPF area.


2 – In the New Area Definition dialog box:

  1. Enter the OSPF area ID in the Area ID text box.
  2. Select the required OSPF area type from the Type drop-down menu:
    1. Normal
    2. NSSA, which prevents the flooding of AS-external link-state advertisements (LSAs) into NSSAs.
  3. (Optional) Select the required authentication type from the Authentication drop-down menu and enter a password.
  4. Click OK.


3 – Under the Area to Interface Mapping section, click the green + icon.


4 – In the New Area to Interface Mapping dialog box:

  1. Select an appropriate uplink interface in the Interface drop-down menu.
  2. Select the OSPF area from the Area drop-down menu.
  3. Enter 1 in the Hello Interval text box.
  4. Enter 3 in the Dead Interval text box.
  5. Enter 128 in the Priority text box.
  6. Enter 1 in the Cost text box.
  7. Click OK.


5 – Under the OSPF Configuration section, click Edit. In the OSPF Configuration dialog box:

  1. Select the Enable OSPF check box.
  2. Enter the OSPF protocol address in the Protocol Address text box.
  3. Enter the OSPF forwarding address in the Forwarding Address text box.
  4. Select the Enable Graceful Restart check box for packet forwarding to be uninterrupted during a restart of OSPF services.
  5. (Optional) Select the Enable Default Originate check box to allow the NSX Edge to advertise itself as a default gateway to its peers.
  6. Click OK.


6 – Click Publish Changes.


7 – Select the Route Redistribution section from the options on the left:

  1. Click Edit.
  2. Select OSPF.
  3. Click OK.



8 – Under the Route Redistribution table section, click the green + icon. In the New Redistribution Criteria dialog box:

  1. Select the IP prefixes from the Prefix Name drop-down menu.
  2. Select OSPF from the Learner Protocol drop-down menu.
  3. Select the appropriate option under Allow Learning From.
  4. Select Permit from the Action drop-down menu.
  5. Click OK.


9 – Click Publish Changes.


Our lab topology is shown below. Add one interface to this DLR, per the topology, for each of the logical switches created in my earlier posts. VMs connected to the two logical switches will now be able to talk to each other, because the logical router’s connected routes are advertised into OSPF.




vSphere 6.5 has been released with a new file system, the first since VMFS5 was released in 2011. The good news is that VMFS6 supports 512 devices and 2048 paths; in previous versions the limit was 256 devices and 1024 paths, and some customers were hitting this limit in their clusters, especially where RDMs are used, where the number of VMs per datastore is limited, or where 8 paths to each device are in play. Hopefully with 6.5 that will not happen anytime soon.

Automatic Space Reclamation is the next feature that many customers and administrators have been waiting for. It is based on VAAI UNMAP, which has been around for a while and allows previously used blocks to be unmapped. Using UNMAP, storage capacity is reclaimed and released to the array so that other volumes can use those blocks when needed. Previously we had to run the unmap command manually to reclaim blocks; now this has been integrated into the UI and can be turned on or off from the GUI.

SESparse will be the snapshot format supported on VMFS6; vSphere will not support the VMFSsparse snapshot format on VMFS6, though it continues to be supported on VMFS5. VMFS6 and VMFS5 can coexist, but there is no straightforward upgrade from VMFS5 to VMFS6. After you upgrade your ESXi hosts to version 6.5, you can continue using any existing VMFS5 datastores. To take advantage of VMFS6 features, create a VMFS6 datastore and migrate virtual machines from the VMFS5 datastore to it; you cannot upgrade a VMFS5 datastore in place. Today SESparse is used primarily for View and for virtual disks larger than 2 TB; with VMFS6 the default will be SESparse.

Comparison is as below:

Features and Functionalities | VMFS5 | VMFS6
--- | --- | ---
Access for ESXi 6.5 hosts | Yes | Yes
Access for ESXi hosts version 6.0 and earlier | Yes | No
Datastores per host | 1024 | 1024
512n storage devices | Yes (default) | Yes
512e storage devices | Yes (not supported on local 512e devices) | Yes (default)
Automatic space reclamation | No | Yes
Manual space reclamation through the esxcli command | Yes | Yes
Space reclamation from guest OS | Limited | Yes
GPT storage device partitioning | Yes | Yes
MBR storage device partitioning | Yes (for a VMFS5 datastore previously upgraded from VMFS3) | No
Storage devices greater than 2 TB for each VMFS extent | Yes | Yes
Support for virtual machines with large capacity virtual disks, or disks greater than 2 TB | Yes | Yes
Support of small files of 1 KB | Yes | Yes
Default use of ATS-only locking mechanisms on storage devices that support ATS | Yes | Yes
Block size | Standard 1 MB | Standard 1 MB
Default snapshots | VMFSsparse for virtual disks smaller than 2 TB; SEsparse for virtual disks larger than 2 TB | SEsparse
Virtual disk emulation type | 512n | 512n
vMotion | Yes | Yes
Storage vMotion across different datastore types | Yes | Yes
High Availability and Fault Tolerance | Yes | Yes
DRS and Storage DRS | Yes | Yes
RDM | Yes | Yes

Happy Learning 🙂

Creating IP Sets with NSX API Calls using PowerShell

A few days back, one of my close friends was working with a customer whose environment is a cross-vCenter NSX environment. Because of this, they had to create IP sets, since IP sets are universal objects that get synchronized with the secondary NSX instance. The list of IP sets was big, so we looked at automating it with the NSX API; we didn’t want to make many API calls manually, so we decided to do it the PowerShell way, using Invoke-RestMethod, which was introduced in PowerShell 3.0.


The Invoke-RestMethod cmdlet sends HTTP and HTTPS requests to Representational State Transfer (REST) web services that return richly structured data. Windows PowerShell formats the response based on the data type. For an RSS or Atom feed, Windows PowerShell returns the Item or Entry XML nodes. For JavaScript Object Notation (JSON) or XML, Windows PowerShell converts (or deserializes) the content into objects.

CSV File Format used in Script:


Script to Create IP Sets from CSV is as below:
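The script itself was shared as a screenshot, so it is not reproduced here. As a hedged illustration of the same pattern (read a CSV, build one request body per row, POST each body), here is a Python sketch using only the standard library. The CSV columns, the /api/2.0/services/ipset/universalroot-0 endpoint, and the exact XML fields are assumptions to verify against the NSX API guide for your version:

```python
# Sketch only: CSV columns (name,value,description), the universal IP-set
# endpoint, and the XML fields below are assumptions to verify against the
# NSX for vSphere API guide before use.
import csv
import io
import xml.etree.ElementTree as ET

def build_ipset_xml(name, value, description=""):
    """Build one <ipset> request body; value may be a comma-separated list
    of addresses, ranges, or CIDRs (e.g. "192.168.10.0/24")."""
    ipset = ET.Element("ipset")
    ET.SubElement(ipset, "name").text = name
    ET.SubElement(ipset, "value").text = value
    ET.SubElement(ipset, "description").text = description
    ET.SubElement(ipset, "inheritanceAllowed").text = "true"
    return ET.tostring(ipset, encoding="unicode")

def ipsets_from_csv(csv_text):
    """Yield one XML body per CSV row."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        yield build_ipset_xml(row["name"], row["value"],
                              row.get("description", ""))

sample = "name,value,description\nWeb-Servers,192.168.10.0/24,web tier\n"
for body in ipsets_from_csv(sample):
    print(body)
    # Each body would then be POSTed (basic auth, Content-Type:
    # application/xml) to something like:
    #   https://<nsx-manager>/api/2.0/services/ipset/universalroot-0
    # which is what Invoke-RestMethod did in the original PowerShell script.
```

The same loop maps one-to-one onto PowerShell: Import-Csv for the reader and Invoke-RestMethod -Method Post for each body.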





Finally, using Invoke-RestMethod or Invoke-WebRequest, we can automate many routine NSX tasks with PowerShell. I hope this helps in your daily administrative work 🙂


Learn NSX – Part-12 (Create NSX Edge Services Gateway)

Each NSX Edge virtual appliance can have a total of 10 uplink and internal network interfaces. Overlapping IP addresses are not allowed for internal interfaces, and overlapping subnets are not allowed between internal and uplink interfaces.

1- Log in to the vSphere Web Client and click Networking & Security.

2- Select NSX Edges under the Networking & Security section.

3- Click the green + icon to add a new NSX Edge.


4- In the Name and description dialog box:

  1. Select Edge Services Gateway as the Install Type.
  2. Enter a name for the NSX Edge services gateway in the Name text box. The name should be unique across all NSX Edge services gateways within a tenant.
  3. Enter a host name for the NSX Edge services gateway in the Hostname text box.
  4. Enter a description in the Description text box.
  5. Enter tenant details in the Tenant text box.
  6. Confirm that Deploy NSX Edge is selected (default).
  7. Select Enable High Availability to enable and configure high availability.
  8. Click Next.


5 – In the Settings dialog box:

  1. Leave the default user name of admin in the User Name text box.
  2. Enter a password in the Password and Confirm Password text boxes. The password must be 12 to 255 characters and must contain the following:
    1. At least one upper case letter
    2. At least one lower case letter
    3. At least one number
    4. At least one special character
  3. Select the Enable SSH access check box.
  4. Select the Enable auto rule generation check box.
  5. Select EMERGENCY from the Edge Control Level Logging drop-down menu.
  6. Click Next.
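The password rules above are easy to pre-check before submitting the wizard. A small validation sketch follows; this is a local convenience, not something NSX provides:

```python
# Local pre-check of the wizard's password rules; a convenience sketch,
# not an NSX-provided validator.
import re
import string

def valid_edge_password(pw):
    """True if pw is 12-255 chars with at least one upper-case letter,
    one lower-case letter, one digit, and one special character."""
    return bool(
        12 <= len(pw) <= 255
        and re.search(r"[A-Z]", pw)
        and re.search(r"[a-z]", pw)
        and re.search(r"\d", pw)
        and any(c in string.punctuation for c in pw)
    )

print(valid_edge_password("VMware1!VMware1!"))  # True
print(valid_edge_password("short1!A"))          # False -- under 12 chars
```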


6 – In the Configure deployment dialog box:

  1. Select the data center from the Datacenter drop-down menu.
  2. Select the appropriate Appliance Size.
  3. Click the green + icon in the NSX Edge Appliances section.
  4. Select the cluster or resource pool from the Cluster/Resource Pool drop-down menu.
  5. Select the datastore from the Datastore drop-down menu.
  6. (Optional) Select the host from the Host drop-down menu.
  7. (Optional) Select the folder from the Folder drop-down menu.
  8. Click OK and click Next.



7 – In the Configure Interfaces dialog box:

  1. Under the Configure interfaces of this NSX Edge section, click the green + icon to create an interface.
    1. NOTE – You must add at least one internal interface for HA to work.
  2. Enter the NSX Edge interface name in the Name text box.
  3. Select Internal or Uplink as the interface type.
  4. Click Change next to the Connected To selection box to choose the appropriate logical switch, standard port group or distributed port group with which to connect the interface.
  5. Select Connected for the Connectivity Status.
  6. Assign a primary IP address and subnet prefix length.
  7. Select the appropriate options.
  8. Select Enable Proxy ARP for overlapping network forwarding between different interfaces. Select Send ICMP Redirect to convey routing information to hosts.
  9. Click OK and click Next.


8 – In the Default gateway settings dialog box, deselect the Configure Default Gateway check box and click Next.


9 – In the Firewall and HA dialog box:

  1. Select the Configure Firewall default policy check box.
  2. Select Accept for Default Traffic Policy.
  3. Select Disable for Logging.
  4. (Optional) If high availability is enabled, complete the Configure HA parameters section. By default, HA automatically chooses an internal interface and automatically assigns link-local IP addresses.
  5. Click Next.


NOTE – If ANY is selected for the high availability interface but there are no internal interfaces configured, the user interface will not display an error. Two NSX Edge appliances will be created, but because there is no internal interface configured, the new NSX Edge appliances remain in standby and high availability is disabled. After an internal interface is configured, high availability will be enabled on the NSX Edge appliances.

10 – In the Ready to complete dialog box, review the configuration and click Finish.


Happy Learning 🙂

Learn NSX – Part-11 (Create Distributed Logical Router)

In the previous post, we discussed creating logical switches; workloads now have L2 adjacency across IP subnets with the help of VXLAN. In this post, we are going to enable routing between multiple logical switches.



An NSX Edge logical router provides routing and bridging functionality. With distributed routing, virtual machines that reside on the same host on different subnets can communicate with one another without having to traverse a traditional routing interface.

Prerequisites before deploying DLR

  • At least one deployed NSX Controller node
  • At least one logical switch

NOTE – A DLR instance cannot be connected to logical switches that exist in different transport zones.

1 – Log in to the vSphere Web Client and click Networking & Security.

2 – Select NSX Edges under the Networking & Security section.


Click the green + icon to add a new NSX Edge.


3 – In the Name and description dialog box:

  1. Select Logical (Distributed) Router as the Install Type.
  2. Enter the name in the Name text box. This name appears in your vCenter inventory and should be unique across all logical routers within a single tenant.
  3. Enter a host name for the distributed logical router in the Hostname text box.
  4. Enter a description in the Description text box.
  5. Enter tenant details in the Tenant text box.
  6. Deploy Edge Appliance is selected by default. An NSX Edge appliance is required for dynamic routing and the logical router firewall; deploy it only if you need those features. NOTE – An NSX Edge appliance cannot be added to the logical router after the logical router has been created.
  7. Select Enable High Availability to enable and configure high availability. High availability is required for dynamic routing.
  8. Click Next.


4 – In the Settings dialog box:

  1. Leave the default user name of admin in the User Name text box.
  2. Enter a password in the Password and Confirm Password text boxes.
  3. Select the Enable SSH access check box.
  4. Select EMERGENCY from the Edge Control Level Logging drop-down menu.
  5. Click Next.


5 – In the Configure deployment dialog box:

  1. Select the data center from the Datacenter drop-down menu.
  2. Click the green + icon in the NSX Edge Appliances section.
  3. Select the cluster or resource pool from the Cluster/Resource Pool drop-down menu.
  4. Select the datastore from the Datastore drop-down menu.
  5. (Optional) Select the host from the Host drop-down menu.
  6. (Optional) Select the folder from the Folder drop-down menu.
  7. Click OK and click Next.



6 – In the Configure Interfaces dialog box:

  1. Under the HA Interface Configuration section, click Select next to the Connected To selection box to choose the appropriate logical switch or distributed port group. Generally, this interface should be connected to the management distributed port group.
  2. Under the Configure interfaces of this NSX Edge section, click the green + icon to create a logical interface.
  3. Enter the logical router interface name in the Name text box.
  4. Select the Internal or Uplink radio button as the Type.
  5. Click Change next to the Connected To selection box to choose the appropriate logical switch with which to connect the interface.
  6. Select Connected for Connectivity Status.
  7. Under the Configure subnets section, click the green + icon and assign an IP address and subnet prefix length. Click OK.
  8. Repeat steps 2 through 7 for each additional interface to be created.
  9. Click OK and click Next.



7 – In the Default gateway settings dialog box, deselect the Configure Default Gateway check box and click Next.


8 – In the Ready to complete dialog box, review the configuration and click Finish.


NOTE – For the HA interface, do not use an IP address that exists elsewhere on your network, even if that network is not directly attached to the NSX Edge.
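The GUI steps above can also be scripted against the NSX-v REST API by POSTing an edge document of type `distributedRouter` to `/api/4.0/edges`. This is a hedged sketch, not a verbatim recipe: the manager address, credentials, and all managed-object IDs (datacenter, resource pool, datastore, virtualwire) are placeholders you would replace with values from your own environment, and the live call is left commented out:

```shell
# Hedged sketch: creating a DLR through the NSX-v REST API instead of the
# Web Client. Every hostname, credential, and managed-object ID below is a
# placeholder -- substitute values from your own inventory.
NSX_MGR="nsxmgr.corp.local"

PAYLOAD='<edge>
  <type>distributedRouter</type>
  <name>DLR-01</name>
  <datacenterMoid>datacenter-21</datacenterMoid>
  <appliances>
    <appliance>
      <resourcePoolId>resgroup-20</resourcePoolId>
      <datastoreId>datastore-23</datastoreId>
    </appliance>
  </appliances>
  <mgmtInterface>
    <connectedToId>virtualwire-10</connectedToId>
  </mgmtInterface>
  <interfaces>
    <interface>
      <name>Transit-Uplink</name>
      <type>uplink</type>
      <connectedToId>virtualwire-11</connectedToId>
      <addressGroups>
        <addressGroup>
          <primaryAddress>192.168.10.2</primaryAddress>
          <subnetMask>255.255.255.0</subnetMask>
        </addressGroup>
      </addressGroups>
    </interface>
  </interfaces>
</edge>'

# Live call (commented out -- requires a reachable NSX Manager):
# curl -k -u admin:password -H "Content-Type: application/xml" \
#      -d "$PAYLOAD" "https://$NSX_MGR/api/4.0/edges"

echo "$PAYLOAD"
```

Internal interfaces and additional uplinks follow the same `<interface>` structure shown in the payload.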

Happy Learning 🙂

Docker Engine for Windows Server 2016 Now Available


Microsoft announced the general availability of Windows Server 2016, and with it, the Docker engine running containers natively on Windows. This marks the expansion of the Docker platform beyond Linux workloads to support Windows Server applications. The Commercially Supported Docker Engine (CS Docker Engine) is now available at no additional cost with every edition of Windows Server 2016.

Windows Server 2016 is where Docker Windows containers should be deployed for production. For developers planning to do lots of Docker Windows container development, it may also be worth setting up a Windows Server 2016 dev system (in a VM, for example), at least until Windows 10 and Docker for Windows support for Windows containers matures.

Once Windows Server 2016 is running, log in, run Windows Update to ensure you have all the latest updates and install the Windows-native Docker Engine directly (that is, not using “Docker for Windows”). Run the following in an Administrative PowerShell prompt:

# Add the containers feature and restart
Install-WindowsFeature containers
Restart-Computer -Force

# Download, install and configure Docker Engine
Invoke-WebRequest "https://download.docker.com/components/engine/windows-server/cs-1.12/docker.zip" -OutFile "$env:TEMP\docker.zip" -UseBasicParsing

Expand-Archive -Path "$env:TEMP\docker.zip" -DestinationPath $env:ProgramFiles

# For quick use; does not require the shell to be restarted.
$env:path += ";c:\program files\docker"

# For persistent use; will apply even after a reboot.
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Docker", [EnvironmentVariableTarget]::Machine)

# Start a new PowerShell prompt before proceeding
dockerd --register-service
Start-Service docker

For More Details Click Here

Let’s learn it before it becomes mainstream in the industry 🙂


NFV, SDN and VMware NSX


After compute virtualization, Network Functions Virtualization (NFV) is the next step: taking functions that run on dedicated physical networking equipment and running them in VMs. NFV separates network functions from routers, firewalls, load balancers, and other dedicated hardware devices, allowing network services to be hosted on virtual machines. Virtual machines run on a hypervisor, also called a virtual machine manager, which allows multiple operating systems to share a single hardware processor. When the hypervisor hosts network functions, services that once required dedicated hardware can be performed on standard x86 servers.


SDN makes the network programmable by separating the control plane (telling the network what goes where) from the data plane (sending packets to specific destinations). It relies on switches that can be programmed through an SDN controller using an industry-standard control protocol, such as OpenFlow.

How SDN and NFV compare:

  • SDN separates the control and data planes and centralizes control and programmability of the network; NFV transfers network functions from dedicated appliances to generic servers.
  • With SDN, network intelligence is logically centralized in controller software that maintains a global view of the network, which appears to applications and policy engines as a single, logical switch; with NFV, functions run as virtual devices that require no extra space, power, or cooling.
  • SDN operates in campus, data center, and/or cloud environments; NFV avoids the problem of physical devices that rapidly reach end of life or run out of capacity and need to be upgraded to larger models.
  • SDN software targets cloud orchestration and networking; NFV software targets the service provider network, including routers, firewalls, gateways, WAN, CDN, accelerators, and SLA assurance.

SDN and NFV Are Better Together

These approaches are mutually beneficial but not dependent on one another; you do not need one to have the other. In practice, however, SDN makes NFV and NV more compelling, and vice versa. SDN contributes network automation that enables policy-based decisions to orchestrate which network traffic goes where, while NFV focuses on the services, and NV ensures the network’s capabilities align with the virtualized environments they are supporting.

VMware NSX

VMware NSX is a hypervisor networking solution designed to manage, automate, and provide basic Layer 4-7 services to virtual machine traffic. NSX can provide switching, routing, and basic load-balancer and firewall services to data moving between virtual machines from within the hypervisor. For non-virtual-machine traffic (handled by more than 70% of data center servers), NSX requires traffic to be sent into the virtual environment. While NSX is often classified as an SDN solution, in my understanding that is not really the case.

SDN is defined as providing the ability to manage the forwarding of frames/packets and apply policy; to perform this at scale in a dynamic fashion; and to be programmed. This means that an SDN solution must be able to forward frames. Because NSX has no hardware switching components, it is not capable of moving frames or packets between hosts, or between virtual machines and other physical resources. In my view, this places VMware NSX into the Network Functions Virtualization (NFV) category. NSX virtualizes switching and routing functions, with basic load-balancer and firewall functions.



NSX can be used to create a secure infrastructure, which can create a zero-trust security model. Every virtualized workload can be protected with a full stateful firewall engine at a very granular level. Security can be based on constructs such as MAC, IP, ports, vCenter objects and tags, active directory groups, etc. Intelligent dynamic security grouping can drive the security posture within the infrastructure.


VMware NSX provides a full RESTful API to consume networking, security and services, which can be used to drive automation within the infrastructure. IT admins can reduce the tasks and cycles required to provision workloads within the data center using NSX.

NSX is integrated out of the box with automation tools such as vRealize automation, which can provide customers with a one-click deployment option for an entire application, which includes the compute, storage, network, security and L4-L7 services.

Developers can use NSX with the OpenStack platform. NSX provides a neutron plugin that can be used to deploy applications and topologies via OpenStack.

Application Continuity:

NSX provides a way to easily extend networking and security across up to eight vCenter instances, either within or across data centers. In conjunction with vSphere 6.0, customers can easily vMotion a virtual machine across long distances, and NSX will ensure that the network and firewall rules remain consistent across the sites. This essentially maintains the same view across sites.

NSX Cross-vCenter Networking can help build active-active data centers. Customers are using NSX today with VMware Site Recovery Manager to provide disaster recovery solutions. NSX can extend the network across data centers and even to the cloud to enable seamless networking and security.



Logical switching enables extension of a L2 segment / IP subnet anywhere in the fabric independent of the physical network design.


Routing between IP subnets can be done in the logical space without traffic leaving the hypervisor; routing is performed directly in the hypervisor kernel with minimal CPU/memory overhead. This distributed logical routing (DLR) provides an optimal data path for traffic within the virtual infrastructure (i.e., east-west communication). Additionally, the NSX Edge provides an ideal centralized point for seamless integration with the physical network infrastructure to handle communication with the external network (i.e., north-south communication) with ECMP-based routing.

Connectivity to physical networks:

L2 and L3 gateway functions are supported within NSX to provide communication between workloads deployed in logical and physical spaces.

Edge Firewall:

Edge firewall services are part of the NSX Edge Services Gateway (ESG). The Edge firewall provides essential perimeter firewall protection which can be used in addition to a physical perimeter firewall. The ESG-based firewall is useful in developing PCI zones, multi-tenant environments, or dev-ops style connectivity without forcing the inter-tenant or inter-zone traffic onto the physical network.


VPN Services:

NSX provides L2 VPN, IPsec VPN, and SSL VPN services to enable L2 and L3 VPNs. These services address the critical use cases of interconnecting remote data centers and providing remote user access.

Logical Load-balancing:

NSX provides L4-L7 load balancing with support for SSL termination. The load balancer comes in two form factors, supporting inline as well as proxy-mode configurations. It addresses a critical use case in virtualized environments, enabling devops-style functionality that supports a variety of workloads in a topology-independent manner.

DHCP & NAT Services:

NSX supports DHCP servers, DHCP forwarding mechanisms, and NAT services. It also provides an extensible platform that can be used for deployment and configuration of third-party vendor services. Examples include virtual form factor load balancers (e.g., F5 BIG-IP LTM) and network monitoring appliances (e.g., Gigamon GigaVUE-VM). Integration of these services is simple with existing physical appliances such as physical load balancers and IPAM/DHCP server solutions.

Distributed Firewall:

Security enforcement is done directly at the kernel and vNIC level. This enables highly scalable firewall rule enforcement by avoiding bottlenecks on physical appliances. The firewall is distributed in kernel, minimizing CPU overhead while enabling line-rate performance.

NSX also provides an extensible framework, allowing security vendors to provide an umbrella of security services. Popular offerings include anti-virus/anti-malware/anti-bot solutions, L7 firewalling, IPS/IDS (host- and network-based) services, file integrity monitoring, and vulnerability management of guest VMs.

I hope this helps you understand what NSX does and its use cases. Comments are welcome to make this post more accurate and interesting… 🙂

Learn NSX – Part-10 (Create Logical Switches)

A logical switch reproduces Layer 2 and Layer 3 functionality (unicast, multicast, or broadcast) in a virtual environment completely decoupled from underlying hardware.

  • Select Logical Switches under the Networking & Security section.
  • Click the green + icon to add a new logical switch.
  • In the New Logical Switch dialog box:
    • Enter the logical switch name in the Name text box.
    • Enter a description in the Description text box. For ease of use, add the subnet that will be used within this logical switch.
    • In the Transport Zone selection box, click Change to choose the appropriate transport zone.
    • Leave the Replication Mode option as-is, or select a new one if you want to override the transport zone replication mode. By default, the logical switch inherits the control plane mode from the transport zone.
    • Select the Enable IP Discovery check box to enable ARP suppression. This minimizes ARP flooding within individual VXLAN segments.
    • Select the Enable MAC Learning check box to avoid possible traffic loss during VMware vSphere vMotion.
    • Click OK.

Happy Learning 🙂

In my previous post, I demonstrated how to create a logical switch using the API; for details, please check that post.
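As a quick companion to the GUI steps above, here is a hedged sketch of the same logical switch creation through the NSX-v REST API, which POSTs a `virtualWireCreateSpec` to a transport zone scope. The manager address, credentials, scope ID (vdnscope-1), and switch name are placeholder lab values, and the live call is commented out:

```shell
# Hedged sketch: creating a logical switch (virtual wire) via the NSX-v API.
# All values below are placeholders for your own environment.
NSX_MGR="nsxmgr.corp.local"
SCOPE_ID="vdnscope-1"    # transport zone scope ID

PAYLOAD='<virtualWireCreateSpec>
  <name>LS-Web-Tier</name>
  <description>Web tier - 172.16.10.0/24</description>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
  <tenantId>default</tenantId>
</virtualWireCreateSpec>'

# Live call (commented out -- requires a reachable NSX Manager):
# curl -k -u admin:password -H "Content-Type: application/xml" \
#      -d "$PAYLOAD" "https://$NSX_MGR/api/2.0/vdn/scopes/$SCOPE_ID/virtualwires"

echo "$PAYLOAD"
```

The `controlPlaneMode` element corresponds to the Replication Mode option in the dialog box; omit it to inherit the transport zone default.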

VMware Validated Design (SDDC) Available for download

The VMware Validated Design provides a set of prescriptive documents that explain how to plan, deploy, and configure a deployment of a Software-Defined Data Center (SDDC). This design supports a number of use cases, and is designed to minimize difficulty in integration, expansion, and operation, as well as future updates and upgrades.

In my opinion, this is really going to help architects and administrators align their designs and deployments with VMware recommendations, helping customers build a more robust, highly available, and supported environment.


Learn NSX – Part-09 (Transport Zone)

The Segment ID Pool specifies a range of VXLAN Networks Identifiers (VNIs) to use when building logical network segments. This determines the maximum number of logical switches that can be created in your infrastructure.

  • You must specify a segment ID pool for each NSX Manager to isolate your network traffic.

A transport zone defines the span of a logical switch by delineating the width of the VXLAN/VTEP replication scope and control plane. It can span one or more vSphere clusters. You can have one or more transport zones based on your requirements.

  • Log in to the vSphere Web Client and click Networking & Security.
  • Select Installation under the Networking & Security section and select the Logical Network Preparation tab.
  • Select the Segment ID menu and click Edit.
  • Enter the range of numbers to be used for VNIs in the Segment ID pool text box and click OK.
  • Select the Transport Zones menu and click the green + icon to add a new transport zone.
  • In the New Transport Zone dialog box:
    • Enter the name of the transport zone in the Name text box.
    • (Optional) Add a description in the Description text box.
    • Depending on whether you have NSX Controller nodes in your environment or want to use multicast addresses, select the appropriate Replication mode (also known as the control plane mode).
    • Select the check boxes for each cluster to be added to the transport zone.
    • Click OK.

We have now completed the prerequisites for virtual network deployment. In the next few posts, I will help you deploy logical switching, routing, and more.
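The segment ID pool configured above can also be set through the NSX-v REST API by POSTing a `segmentRange` document. A hedged sketch follows; the manager address, credentials, pool name, and the 5000-5999 range are placeholder lab values, and the live call is commented out:

```shell
# Hedged sketch: configuring the VXLAN segment ID pool via the NSX-v API.
# All values below are placeholders for your own environment.
NSX_MGR="nsxmgr.corp.local"

PAYLOAD='<segmentRange>
  <name>Segment-Pool-1</name>
  <begin>5000</begin>
  <end>5999</end>
</segmentRange>'

# Live call (commented out -- requires a reachable NSX Manager):
# curl -k -u admin:password -H "Content-Type: application/xml" \
#      -d "$PAYLOAD" "https://$NSX_MGR/api/2.0/vdn/config/segments"

echo "$PAYLOAD"
```

The begin/end values map directly to the range entered in the Segment ID pool text box, and they bound how many logical switches the environment can hold.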

Happy Learning 🙂

Containers in Windows Server 2016 TP5

I am just starting to get my head around the concept of containers and decided to take Windows Server 2016 Technical Preview 5 for a test drive, which includes Docker containers as a feature. It’s worth being explicit here (for anyone not aware) that this isn’t Microsoft’s own version of containers; these are Docker containers.

There are two types of containers supported by Microsoft.

Windows Containers

  • Multiple container instances can run concurrently on a host, with isolation provided through namespace, resource control, and process isolation technologies. Windows Server containers share the same kernel with the host, as well as each other.

Hyper-V Containers

  • Multiple container instances can run concurrently on a host; however, each container runs inside of a special virtual machine. This provides kernel level isolation between each Hyper-V container and the container host.

Let’s deploy our first Windows container host.

First, install the container feature.

The container feature needs to be enabled before working with Windows containers. To do so, run the following command in an elevated PowerShell session.

Install-WindowsFeature containers

When the feature installation has completed, reboot the computer.

Restart-Computer -Force

Next, install Docker.

Docker is required in order to work with Windows containers. Docker consists of the Docker Engine, and the Docker client. For this exercise, both will be installed.

New-Item -Type Directory -Path 'C:\Program Files\docker\'

Invoke-WebRequest https://aka.ms/tp5/b/dockerd -OutFile $env:ProgramFiles\docker\dockerd.exe

Invoke-WebRequest https://aka.ms/tp5/b/docker -OutFile $env:ProgramFiles\docker\docker.exe

[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Docker", [EnvironmentVariableTarget]::Machine)

The commands above create a new Docker folder in Program Files, download the Docker daemon and client binaries into that directory, and add the directory to the Path environment variable.

To install Docker as a Windows service, run the following.

dockerd --register-service

Once installed, the service can be started.

Start-Service Docker

Install Base Container Images

Before working with Windows Containers, a base image needs to be installed. Base images are available with either Windows Server Core or Nano Server as the underlying operating system.

To install the Windows Server Core base image run the following:

Install-PackageProvider ContainerImage -Force
Install-ContainerImage -Name WindowsServerCore


Restart the Docker service: