Friends, in my previous NSX series posts we successfully deployed NSX Manager. The next step is to deploy the NSX Controllers. In this post I will explain the role of the NSX Controllers, and in the next post we will deploy the controller cluster.
The NSX Controller cluster is the control-plane component responsible for managing the switching and routing modules in the hypervisors. The cluster consists of controller nodes that manage specific logical switches. Using a controller cluster to manage VXLAN-based logical switches eliminates the need for multicast configuration at the physical layer for the VXLAN overlay.
NSX Controller nodes perform the following functions:
- Provide the control plane that distributes VXLAN and logical routing information to the ESXi hosts.
- Are clustered for scale-out and high availability.
- Slice network information across the nodes in the cluster for redundancy.
- Eliminate the need for multicast support from the physical network infrastructure.
- Provide ARP suppression of broadcast traffic in VXLAN networks.
NSX Controller nodes are deployed in a cluster with a minimum of three members to provide high availability and scale. The high availability of the NSX Controller cluster reduces downtime when a single physical host fails.
The information below is taken from the NSX Reference Design guide.
For resiliency and performance, production deployments should place the controller VMs on three distinct hosts. The NSX Controller cluster is a scale-out distributed system in which each controller node is assigned a set of roles that define the type of tasks the node can implement. To increase the scalability of the NSX architecture, a slicing mechanism is used to ensure that all controller nodes can be active at any given time.
[Figure: Distribution of roles and responsibilities across the three controller cluster nodes]
The figure above illustrates the distribution of roles and responsibilities across the three cluster nodes, showing how distinct controller nodes act as master for given entities such as logical switching, logical routing, and other services. Each node in the controller cluster is identified by a unique IP address. When an ESXi host establishes a control-plane connection with one member of the cluster, a full list of the IP addresses of the other members is passed down to the host, enabling it to establish communication channels with all members of the controller cluster. The ESXi host therefore knows at any given time which specific node is responsible for any given logical network.
If a controller node fails, the slices owned by that node are reassigned to the remaining members of the cluster. For this mechanism to be resilient and deterministic, one of the controller nodes is elected as a master for each role. The master is responsible for allocating slices to individual controller nodes, determining when a node has failed, and reallocating the slices to the other nodes. The master also informs the ESXi hosts about the failure of a cluster node so that they can update their internal node-ownership mapping.
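VMware does not publish the internals of its slicing algorithm, but a minimal Python sketch can illustrate the idea described above. The slice count, VNI, and controller IP addresses here are hypothetical, chosen only to show how objects map to slices and how slices get reassigned when a node fails:

```python
# Illustrative model of controller slicing -- NOT VMware's actual algorithm.

NUM_SLICES = 8  # the real slice count per role is internal to NSX; 8 is an assumption

def assign_slices(nodes):
    """Distribute slice IDs round-robin across the active controller nodes."""
    return {s: nodes[s % len(nodes)] for s in range(NUM_SLICES)}

def owner_of(vni, slice_map):
    """Map a logical switch (VNI) to a slice, then to the owning controller."""
    return slice_map[hash(vni) % NUM_SLICES]

nodes = ["192.168.110.201", "192.168.110.202", "192.168.110.203"]
slices = assign_slices(nodes)
print("VNI 5001 owned by", owner_of(5001, slices))

# Node failure: the master recomputes ownership with the surviving nodes
# and pushes the updated mapping down to the ESXi hosts.
nodes.remove("192.168.110.202")
slices = assign_slices(nodes)
print("VNI 5001 now owned by", owner_of(5001, slices))
```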
The election of the master for each role requires a majority vote of all active and inactive nodes in the cluster. This is the primary reason why a controller cluster must always be deployed with an odd number of nodes.
[Figure: Controller cluster majority (quorum) scenarios for different node counts]
The figure above highlights the different majority numbers depending on how many controller nodes are available. In a distributed environment a node majority (quorum) is required. When one of three nodes fails, the two remaining nodes still form a majority and continue to operate. If one of those two nodes were then to fail, or if inter-node communication were lost (a dual-active scenario), neither node could continue to function properly. For this reason, NSX supports controller clusters with a minimum configuration of three nodes. After a second node failure the cluster is left with a single node and reverts to read-only mode: the existing configuration continues to work, but no new modifications to the configuration are allowed.
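The quorum arithmetic is easy to verify yourself: a cluster of n nodes needs floor(n/2) + 1 members to form a majority, so it tolerates n minus that many failures. A quick check:

```python
# Quorum math behind the odd-number recommendation.
def majority(n):
    return n // 2 + 1

for n in range(1, 6):
    print(f"{n} nodes: majority={majority(n)}, tolerated failures={n - majority(n)}")

# 3 nodes tolerate 1 failure, while 4 nodes still tolerate only 1 --
# the extra even node adds no resiliency, which is why controller
# clusters are always deployed with an odd member count.
```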
NSX Controller nodes are deployed as virtual appliances from the NSX Manager UI. Each appliance communicates via a distinct IP address. This address is often in the same subnet as the NSX Manager, but that is not a hard requirement. Each appliance must strictly adhere to the specifications in the table below.
Per controller VM configuration:

| No. of Controller VMs | vCPU | Reservation | Memory | OS Disk |
|---|---|---|---|---|
| 3 | 4 | 2048 MHz | 4 GB | 20 GB |
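Deployment is normally done from the NSX Manager UI (which we will cover in the next post), but the same operation is exposed through the NSX for vSphere REST API (`POST /api/2.0/vdn/controller` with a `controllerSpec` body). A minimal Python sketch, where the NSX Manager hostname, credentials, and the MoRef/IP-pool IDs are placeholders you would replace with values from your own environment:

```python
import requests

# Hypothetical lab values -- replace with IDs from your vCenter/NSX Manager.
NSX_MGR = "https://nsxmgr.lab.local"
SPEC = """<controllerSpec>
  <name>nsx-controller-1</name>
  <ipPoolId>ipaddresspool-1</ipPoolId>
  <resourcePoolId>domain-c7</resourcePoolId>
  <datastoreId>datastore-21</datastoreId>
  <connectedToId>network-44</connectedToId>
  <password>VMware1!VMware1!</password>
</controllerSpec>"""

# Creates one controller node; the call is asynchronous and returns a
# job ID you can poll. Repeat three times for a full cluster.
resp = requests.post(
    f"{NSX_MGR}/api/2.0/vdn/controller",
    auth=("admin", "nsx-admin-password"),
    headers={"Content-Type": "application/xml"},
    data=SPEC,
    verify=False,  # lab only; use proper certificates in production
)
resp.raise_for_status()
print("Controller deployment job:", resp.text)
```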
It is recommended to spread the cluster nodes across separate ESXi hosts. This ensures that the failure of a single host does not cause the loss of the majority number in the cluster. You can leverage native vSphere anti-affinity rules to avoid deploying more than one controller node on the same ESXi host, for example as sketched below.
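Such a rule can also be created programmatically with pyVmomi. This is a minimal sketch; the vCenter hostname, credentials, cluster name, and controller VM names are assumptions for a lab environment:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical lab values -- adjust for your environment.
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Locate a managed object of the given type by its inventory name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

cluster = find_by_name(vim.ClusterComputeResource, "Compute-Cluster")
controllers = [find_by_name(vim.VirtualMachine, n)
               for n in ("NSX_Controller_1", "NSX_Controller_2", "NSX_Controller_3")]

# Anti-affinity rule: keep the three controller VMs on separate hosts.
rule = vim.cluster.AntiAffinityRuleSpec(name="nsx-controllers-separate",
                                        enabled=True, mandatory=True, vm=controllers)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
print("Anti-affinity rule task:", task.info.key)

Disconnect(si)
```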
In the next post we will learn how to deploy the NSX Controllers… :)