Troubleshooting VXLAN vmknic

If VXLAN connectivity isn’t operational, meaning a VM on a VXLAN cannot ping another VM on the same logical switch, the most common cause is a misconfiguration of the transport network.

As you are aware, VXLAN has its own vmkernel networking stack, so ping connectivity testing between two different vmknics on the transport VLAN must be done from the ESXi console using one of the commands below:

ping ++netstack=vxlan -d -s 1572 -I vmk3  <vmknic IP>


vmkping ++netstack=vxlan <vmknic IP> -d -s <packet size>


esxcli network diag ping --netstack=vxlan --host <vmknic IP> --df --size=<packetsize>


If the ping fails, launch another one without the don’t-fragment and size arguments set:

ping ++netstack=vxlan -I vmk3 <vmknic IP>

If this one succeeds, it means your MTU isn’t correctly set to at least 1600 on your transport network.

++netstack=vxlan -> instructs the ESXi host to use the VXLAN TCP/IP stack
-d -> sets the Don’t Fragment bit on the IPv4 packet
-s 1572 -> sets the packet size to 1572 bytes to check whether the MTU is correctly set to at least 1600
-I vmk3 -> VXLAN vmkernel interface name
<vmknic IP> -> destination ESXi host vmkernel IP address
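The arithmetic behind these numbers can be checked with plain shell (a sketch; the 50-byte overhead figure assumes standard IPv4 VXLAN encapsulation):

```shell
# A 1572-byte ICMP payload plus the 8-byte ICMP header plus the 20-byte IPv4
# header gives a 1600-byte packet, which is why -s 1572 tests for a 1600 MTU.
echo "test packet size: $((1572 + 8 + 20)) bytes"

# VXLAN overhead per inner frame, relative to the inner IP MTU:
# inner Ethernet (14) + VXLAN header (8) + UDP (8) + outer IPv4 (20) = 50 bytes.
echo "VXLAN overhead: $((14 + 8 + 8 + 20)) bytes"

# So a standard 1500-byte inner MTU needs at least 1550 bytes on the
# transport network; 1600 is the recommended value for headroom.
echo "minimum transport MTU: $((1500 + 50)) bytes"
```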

If all of the pings fail, it is a VLAN ID or uplink misconfiguration. Before going any further, you have to make sure that these pings work; only then can NSX virtual networking be configured successfully.

Happy Learning 🙂



DRS Doctor

DRS Doctor is a command line tool that can be used to diagnose DRS behavior in VMware vCenter clusters. When run against a DRS enabled cluster, it records information regarding the state of the cluster, the work load distribution, DRS moves, etc., in an easy to read log format.

The goal of DRS Doctor is to give VI admins better insight into DRS and the actions it performs. It is very useful for analyzing DRS actions and troubleshooting issues with very little overhead. This is also an easy way for support engineers to read into customer environments without having to rely on developers to debug DrmDump logs in order to troubleshoot simple DRS issues.

DRS Doctor connects to the vCenter server and tracks the list of cluster related tasks and actions. It also tracks DRS recommendations generated and reasons for each recommendation, which is currently only available in a hard-to-read format in DrmDump files. At the end of each log, it dumps the Host and VM resource consumption data to give a quick overview of cluster state. It also provides an operational audit at the end of each log file.


Download Here

Prerequisites for Installation:

  • Requires Python 2.7.6 or higher
  • Requires Python modules “pyyaml” and “pyvmomi”

Note: The VMware vSphere API Python bindings (pyVmomi) can be found here.

For Python versions below 2.7.9, the pyVmomi version should be 5.5 (install a pinned release with pip install pyvmomi==<version>). If using Python 2.7.9 or later, pyVmomi 6.0 can be used.

For certificate validation (SSL) support, Python 2.7.9 or above and pyVmomi 6.0 is required.
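Assuming pip is available, installing the two prerequisite modules might look like the sketch below (the exact pyVmomi release to pin depends on your Python version, as noted above; verify the available release numbers on PyPI):

```shell
# Install DRS Doctor's prerequisite Python modules (sketch).
pip install pyyaml
# Pin pyVmomi to match your Python version: a 6.0 release for Python 2.7.9+,
# a 5.5 release for older Python 2.7.x.
pip install "pyvmomi==6.0.0"
```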

NOTE – DRS must be in partially automated mode in order for DRS Doctor to work. If your cluster is in fully automated mode, DRS Doctor will automatically change the mode to partially automated and apply the load-balancing recommendations based on the configured threshold (it will act just as it would in fully automated mode). If you close DRS Doctor, make sure the DRS automation setting is reverted to fully automated mode.

VMware vROps Manager Fundamentals [V6.2] – Free e-learning

This free e-learning course demonstrates how VMware vRealize® Operations Manager™ delivers intelligent operations management from applications to infrastructure across physical, virtual, and cloud environments.

Course objectives:

– Explain how an analytics-based operational process addresses the challenges of IT operations
– Name the three main use cases for intelligent operations
– Describe how the architecture of vRealize Operations Manager supports scalability, reliability, and extensibility
– Summarize the process to deploy vRealize Operations Manager
– Recognize how vRealize Operations Manager helps IT operations optimize utilization
– Ensure performance and availability across the software-defined data center (SDDC)
– Monitor heterogeneous data centers and hybrid clouds
– Browse for solutions to extend intelligent operations from applications to infrastructure across physical, virtual, and cloud environments

Learn NSX – Part-03 (Deploy NSX Manager)

Here come the NSX Manager deployment prerequisites and procedure.


  • You must be assigned the Enterprise Administrator or NSX for vSphere administrator role.
  • Verify that a datastore is configured and accessible on the target ESXi host. Shared storage is recommended.
  • The resource requirements are:
    • 4 vCPUs
    • 16 GB of memory (16 GB are reserved)
    • 60 GB of disk space
  • As a general guideline, if the NSX-managed environment contains more than 256 hypervisors, VMware recommends increasing NSX Manager resources to 8 vCPUs and 24 GB of RAM.
  • Decide whether the NSX Manager will have IPv4 addressing only, IPv6 addressing only, or dual-stack network configuration. The host name of the NSX Manager will be used by other entities. Therefore, the NSX Manager host name must be mapped to the right IP address in the DNS servers used in the network.
  • Make sure that you know the IP address and gateway, DNS server IP addresses, domain search list, and the NTP server IP address that the NSX Manager will use.
  • The NSX Manager management interface, vCenter Server, and ESXi hosts must be reachable by all future VMware NSX Edge™ and NSX Guest Introspection instances.

Deploy NSX Manager

1. Download the NSX for vSphere 6.2 OVA file from the VMware Web site.
2. To deploy the OVA file:
a. Connect to the VMware vSphere Web Client.
b. Select the vCenter Server on which to deploy the appliance.
c. Select Actions and select Deploy OVF template.


3. In the Select source dialog box:
a. Choose Local File, click Browse and select the OVA file.
b. Click Next.


4. In the Review details dialog box, select the Accept extra configuration options check box and click Next.


5. In the Accept EULAs dialog, click Accept and click Next.


6. In the Select name and folder dialog box:
a. Enter the name for the NSX Manager appliance.
b. Select a folder or data center on which to deploy the virtual machine and click Next.
c. Select the appropriate host and cluster for deployment.


7. In the Select storage dialog box:
a. Select the destination datastore for the appliance and click Next.
b. (Optional) Select a VM Storage Policy if necessary.


8. In the Setup network dialog box, select the port group for the appliance and click Next.


9. In the Customize template dialog box:
a. Enter the CLI admin user and privilege mode passwords.
b. Enter the Network properties.
c. Enter the DNS settings.
d. Enter the Services Configuration.
e. Click Next.


10. In the Ready to complete dialog box:
a. Review the configuration.
b. Select the Power on after deployment check box and click Finish.
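As an alternative to the Web Client wizard, the same OVA can be deployed from the command line with VMware OVF Tool. The names, paths, and credentials below are placeholders, and appliance-specific OVA properties (passwords, network settings) are omitted, so treat this as a sketch rather than a definitive command:

```shell
# Deploy the NSX Manager OVA with ovftool (placeholder names and paths).
ovftool \
  --name="nsx-manager-01" \
  --datastore="shared-datastore" \
  --network="Management-PortGroup" \
  --acceptAllEulas \
  --allowExtraConfig \
  --powerOn \
  VMware-NSX-Manager-6.2.0.ova \
  'vi://administrator@vsphere.local@vcenter.example.local/Datacenter/host/Cluster'
```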


In the next part, I will cover NSX Manager configuration.

Happy Learning 🙂

!!!!Passed VCIX-NV!!!!

Hi friends, today I sat for the VCIX-NV exam (the advanced level of certification for VMware NSX), and I am really happy and excited to share that I cleared it. I am now VCIX-NV certified.



The VCIX-NV exam is the advanced level of certification for VMware NSX in the Network Virtualization track. If you plan on taking this exam, you’ll need good knowledge of NSX-v, vSphere, and some advanced networking concepts. Personally, I found this to be the most difficult VMware exam I’ve taken so far. The topics covered were fair; nothing was asked that was not clearly defined in the blueprint. I found the tasks fairly difficult compared to what I had experienced in other VCAP exams, so this is one exam for which you’ll really need lots of hands-on experience; I believe passing it would be very difficult otherwise.

The version of NSX used for the exam was 6.0.4, so you can expect some minor differences if you are using the latest versions (6.2.x). The exam environment here in Mumbai was very slow and required a lot of patience when clicking links. My Firefox browser did crash a few times, and I had to restart it.

My advice for those preparing for VCIX-NV: get lots of practice, especially on the VMware Hands-on Labs, and refresh your networking and vSphere fundamentals.

Best of Luck !!!!!

Learn NSX – Part-02 (NSX Manager)

I hope the first part of the NSX learning series met your expectations and that you now have an understanding of which NSX components belong to which plane.

Now Here is the deployment architecture…

  1. First you will deploy NSX Manager.
  2. Register NSX Manager with vCenter.
  3. Deploy NSX Controllers.
  4. Prepare hosts , which internally will install vibs.
  5. Then deploy Edge Gateways and Network Services.


In future posts, I will cover each of the NSX components above and their deployment one by one. Let’s start with the first one: NSX Manager.

NSX Manager provides the centralized management plane for the NSX for vSphere architecture and has a one-to-one mapping with vCenter Server for workloads. NSX Manager performs the following functions:

  • Provides a single point of configuration and the REST API entry points in a vSphere environment configured for NSX for vSphere.
  • Responsible for deploying NSX Controller clusters, NSX Edge distributed routers, and NSX Edge services gateways (in the form of OVF format appliances), Guest Introspection Services, and so on.
  • Responsible for preparing ESXi hosts for NSX for vSphere by installing VXLAN, distributed routing, and firewall kernel modules, as well as the User World Agent (UWA).
  • Communicates with NSX Controller clusters through REST and hosts through the VMware vFabric® RabbitMQ message bus. Note that this is an internal message bus specific to NSX for vSphere and does not require any additional services to be set up.
  • Generates certificates for the NSX Controller nodes and ESXi hosts to secure control plane communications with mutual authentication.
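As an example of that REST entry point, a transport-zone query from any client with connectivity to the NSX Manager might look like the sketch below (host name and credentials are placeholders; endpoint paths can vary between NSX versions):

```shell
# List the configured transport zones (network scopes) via the NSX Manager
# REST API. Placeholder host and credentials; -k skips certificate checks.
curl -k -u 'admin:password' \
  https://nsxmgr.example.local/api/2.0/vdn/scopes
```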

VMware NSX 6.2 allows linking multiple vCenter VMware NSX deployments together and managing them from a single NSX Manager that is designated as primary. In such a Cross-vCenter NSX environment, there is one primary NSX Manager instance and one or more secondary instances. The primary NSX Manager instance is linked to the primary vCenter Server instance and allows the creation and management of universal logical switches, universal logical (distributed) routers, and universal firewall rules. Secondary NSX Manager instances are used to manage networking services that are local to themselves. Up to seven secondary NSX Manager instances can be associated with the primary NSX Manager in a Cross-vCenter NSX environment. The configuration of network services on all NSX Manager instances can be performed from one central location.

Note that there is still a one-to-one relationship between an NSX Manager and a vCenter Server.

To manage all NSX Manager instances from the primary NSX Manager in a Cross-vCenter VMware NSX deployment, the vCenter Server instances must be connected with Platform Services Controllers in Enhanced Linked Mode. See the ESXi and vCenter Server 6.0 documentation for details.

An NSX Manager outage affects only specific functionalities, such as the identity-based firewall or flow monitoring collection.

NSX Manager data (e.g., system configuration, events, audit log tables) can be backed up at any time by performing an on-demand backup from the NSX Manager GUI. Periodic backups can also be scheduled (e.g., hourly, daily, or weekly). Restoring a backup is only possible on a freshly deployed NSX Manager appliance that can access one of the previously backed-up instances.

The NSX Manager requires IP connectivity to vCenter Server, the controllers, NSX Edge resources, and ESXi hosts. The NSX Manager typically resides in the same subnet (VLAN) as vCenter Server and communicates over the management network. This is not a strict requirement; NSX Manager supports inter-subnet IP communication where design constraints require subnet separation from vCenter Server (e.g., security policy, multi-domain management).

In the next post, I will cover how to deploy the NSX Manager.

Happy Learning 🙂

Learn NSX – Part-01

As promised in my last post, here comes the first part of the NSX learning series.

NSX for vSphere creates a network virtualization layer on top of which all virtual networks are created. This layer is an abstraction between the physical and virtual networks. The components required to create this network virtualization layer are:

  • vCenter Server
  • NSX Manager
  • NSX Controller
  • NSX Virtual Switch
  • NSX for vSphere API


As per the above figure, these components are separated into three different planes to create communication boundaries and provide isolation of workload data from system control messages:

  • Data plane
  • Control plane
  • Management plane

Data Plane –  The NSX data plane is implemented by the NSX vSwitch. The vSwitch in NSX for vSphere is based on the VDS with additional components added to enable rich services. The add-on NSX components include kernel modules distributed as VMware installation bundles (VIBs). These modules run within the hypervisor kernel, providing services including distributed routing, distributed firewall, and VXLAN to VLAN bridging. The NSX VDS abstracts the physical network, providing access-level switching in the hypervisor. This is central to network virtualization as it enables logical networks that are independent of physical constructs (e.g., VLANs).
The NSX vSwitch enables support for overlay networking with the use of the VXLAN protocol and centralized network configuration. Overlay networking with NSX enables the following capabilities:

• Creation of a flexible logical Layer 2 (L2) overlay over existing IP networks on existing physical infrastructure.
• Agile provisioning of communication, both east-west and north-south, while maintaining isolation between tenants.
• Application workloads and VMs that are agnostic of the overlay network, operating as if they were connected to a physical network.

The data plane also consists of gateway devices that can provide communication from the logical networking space to the physical network (e.g., VXLAN to VLAN). This functionality can happen at either L2 (NSX bridging) or L3 (NSX routing).

In simple English, this is where your data runs: even if the management plane (NSX Manager) or the control plane (NSX Controller) is down, your data traffic is not affected.

Control Plane – The control plane is where network virtualization control messages are located. The NSX Controller is a key part of the NSX control plane. In a vSphere environment with the vSphere Distributed Switch (VDS), the controller enables multicast-free VXLAN and control plane programming of elements such as the Distributed Logical Router (DLR).
The NSX Controller is part of the control plane and is logically separated from all data plane traffic. To further enhance high availability and scalability, NSX Controller nodes are deployed in a cluster with an odd number of instances.
In addition to the controller, the control VM provides the routing control plane that enables local forwarding in ESXi and dynamic routing between ESXi and the north-south routing provided by the Edge VM. It is critical to understand that data plane traffic never traverses the control plane components.

Management Plane is where the network virtualization orchestration happens. In this layer, cloud management platforms such as VMware vRealize™ Automation can be used to request, consume, and destroy networking resources for virtual workloads.

The NSX Manager is the management plane for the NSX ecosystem. NSX Manager provides configuration and orchestration of:
• Logical networking components: logical switching and routing
• Networking and Edge services
• Security services and distributed firewall
Edge services and security services can be provided either by built-in components of NSX Manager or by integrated 3rd-party vendors. NSX Manager allows seamless orchestration of both built-in and external services.

All security services, whether built-in or 3rd party, are deployed and configured by the NSX management plane. The management plane provides a single window for viewing service availability. It also facilitates policy-based service chaining, context sharing, and inter-service event handling. This simplifies auditing of the security posture and streamlines the application of identity-based controls (e.g., AD, mobility profiles).

Happy Learning 🙂