Monthly Archives: March 2017

NSX Controllers?

In an NSX for vSphere environment, the management plane is responsible for providing the GUI and the REST API entry point used to manage the NSX environment.

Control Plane

The control plane consists of a three-node controller cluster running the control plane protocols required to capture the system configuration and push it down to the data plane. The data plane consists of VIB modules installed in the hypervisor during host preparation.

NSX Controller stores the following types of tables:

  • VTEP table – keeps track of which virtual network (VNI) is present on which VTEP/hypervisor.
  • MAC table – keeps track of VM MAC to VTEP IP mappings.
  • ARP table – keeps track of VM IP to VM MAC mappings.
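Conceptually, these three tables are simple mappings that let a controller resolve a VM's location from its IP. The sketch below is an illustrative simplification with made-up addresses, not the controller's actual data structures:

```python
# Illustrative model of the three controller tables (not NSX's real
# internal structures). Addresses and VNIs are made up for the example.

# VTEP table: VNI -> set of VTEP IPs where that logical switch is present
vtep_table = {5001: {"192.168.10.11", "192.168.10.12"}}

# MAC table: (VNI, VM MAC) -> VTEP IP hosting that VM
mac_table = {(5001, "00:50:56:aa:bb:01"): "192.168.10.11"}

# ARP table: (VNI, VM IP) -> VM MAC
arp_table = {(5001, "10.0.0.5"): "00:50:56:aa:bb:01"}

def locate_vm(vni, vm_ip):
    """Resolve a VM IP to (MAC, hosting VTEP) via the controller tables."""
    mac = arp_table.get((vni, vm_ip))
    if mac is None:
        return None  # lookup miss: the data plane falls back to flooding
    return mac, mac_table.get((vni, mac))

print(locate_vm(5001, "10.0.0.5"))
# ('00:50:56:aa:bb:01', '192.168.10.11')
```

Chaining the ARP and MAC tables this way is what lets the controller answer "where does this IP live?" without any traffic leaving the host.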

Controllers maintain routing information by distributing the routing data learned from the control VM to the routing kernel module in each ESXi host. The use of the controller cluster eliminates the need for multicast support in the physical network infrastructure: customers no longer have to provision multicast group IP addresses, and they no longer need to enable PIM routing or IGMP snooping on physical switches or routers. Logical switches need to be configured in unicast mode to take advantage of this feature.

NSX Controllers support an ARP suppression mechanism that reduces the need to flood ARP broadcast requests across the L2 network domain where virtual machines are connected. This is achieved by converting the ARP broadcasts into Controller lookups. If the controller lookup fails, then normal flooding will be used.
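The ARP suppression decision can be sketched as: intercept the ARP broadcast, try a controller lookup, and fall back to flooding only on a miss. This is a conceptual sketch of the decision logic, not the actual host agent code:

```python
# Sketch of the ARP suppression decision on the host (illustrative only).
# The controller's ARP table maps (VNI, target IP) -> target MAC.

controller_arp_table = {(5001, "10.0.0.5"): "00:50:56:aa:bb:01"}

def handle_arp_request(vni, target_ip):
    """Convert an ARP broadcast into a controller lookup when possible."""
    mac = controller_arp_table.get((vni, target_ip))
    if mac is not None:
        # Lookup hit: answer the requester directly, no broadcast needed.
        return ("reply", mac)
    # Lookup miss: fall back to normal flooding across the VNI.
    return ("flood", None)

print(handle_arp_request(5001, "10.0.0.5"))   # ('reply', '00:50:56:aa:bb:01')
print(handle_arp_request(5001, "10.0.0.99"))  # ('flood', None)
```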

The ESXi host, with NSX Virtual Switch, intercepts the following types of traffic and queries the NSX Controller instance to retrieve the correct response to those requests:

  • Virtual machine broadcast traffic
  • Virtual machine unicast traffic
  • Virtual machine multicast traffic

Each controller node is assigned a set of roles that define the tasks it can implement. By default, each controller is assigned all the following roles:

  • API Provider:  Handles HTTP requests from NSX Manager
  • Persistence Server: Persistently stores network state information
  • Logical Manager: Computes policies and network topology
  • Switch Manager: Manages the hypervisors and pushes configuration to the hosts
  • Directory Server: Manages VXLAN and distributed logical routing information

One of the controller nodes is elected as the leader for each role. For example, controller 1 might be elected as the leader for the API Provider and Logical Manager roles, controller 2 as the leader for the Persistence Server and Directory Server roles, and controller 3 as the leader for the Switch Manager role.

The leader for each role is responsible for allocating tasks to the individual nodes in the cluster. This mechanism is called slicing; it is used to increase the scalability of the NSX architecture and ensures that all controller nodes can be active at any given time.
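As a rough illustration, slicing can be thought of as dividing the objects a role is responsible for (for example, VNIs) evenly across the active controller nodes. The round-robin scheme below is a conceptual sketch, not NSX's actual slicing algorithm:

```python
def slice_workload(objects, nodes):
    """Assign each object (e.g., a VNI) to a node round-robin, so every
    node is active and carries a share of the role's work (sketch only)."""
    assignment = {}
    for i, obj in enumerate(sorted(objects)):
        assignment[obj] = nodes[i % len(nodes)]
    return assignment

vnis = [5000, 5001, 5002, 5003, 5004, 5005]
controllers = ["controller-1", "controller-2", "controller-3"]
print(slice_workload(vnis, controllers))
# {5000: 'controller-1', 5001: 'controller-2', 5002: 'controller-3',
#  5003: 'controller-1', 5004: 'controller-2', 5005: 'controller-3'}
```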


The leader of each role maintains a sharding table to keep track of the workload. The sharding table is calculated by the leader and replicated to every controller node, and it is used by both VXLAN and the distributed logical router (DLR). The sharding table may be recalculated when cluster membership changes or the role master changes, or periodically for rebalancing.

If a controller node fails, the slices for a given role that were owned by the failed node are reassigned to the remaining members of the cluster. Node failure also triggers a new leader election for the roles originally led by the failed node.
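Node failure can be sketched as recomputing the slice assignment over the surviving nodes. This is a conceptual sketch assuming a simple round-robin redistribution, not NSX's actual recovery algorithm:

```python
def reassign_slices(assignment, failed_node, surviving_nodes):
    """Reassign slices owned by the failed node to survivors (sketch)."""
    orphaned = sorted(s for s, owner in assignment.items()
                      if owner == failed_node)
    # Keep every slice that still has a live owner.
    new_assignment = {s: o for s, o in assignment.items() if o != failed_node}
    # Spread the orphaned slices across the surviving nodes.
    for i, s in enumerate(orphaned):
        new_assignment[s] = surviving_nodes[i % len(surviving_nodes)]
    return new_assignment

before = {5000: "controller-1", 5001: "controller-2", 5002: "controller-3"}
after = reassign_slices(before, "controller-2",
                        ["controller-1", "controller-3"])
print(after)  # slice 5001 now belongs to a surviving controller
```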

Control Plane Interaction

  • ESXi hosts and NSX logical router virtual machines learn network information and send it to the NSX Controller through the User World Agent (UWA).
  • The NSX Controller CLI provides a consistent interface to verify VXLAN and logical routing network state information.
  • NSX Manager also provides APIs to programmatically retrieve data from the NSX Controller nodes.


Controller Internal Communication

The management plane communicates with the controller cluster over TCP/443. The management plane also communicates directly with the vsfwd agent in the ESXi host over TCP/5671 (RabbitMQ) to push down firewall configuration changes.

The controllers communicate with the netcpa agent running in the ESXi host over TCP/1234 to propagate L2 and L3 changes. Netcpa then internally propagates these changes to the respective routing and VXLAN kernel modules in the ESXi host. Netcpa also acts as a middleman between the vsfwd agent and the ESXi kernel modules.

NSX Manager chooses a single controller node to initiate a REST API call. Once the connection is established, NSX Manager transmits the host certificate thumbprint, VNI, and logical interface information to the NSX Controller cluster.

All the data transmitted by NSX Manager can be found in the file config-by-vsm.xml in the directory /etc/vmware/netcpa on the ESXi host. The file /var/log/netcpa.log can be helpful in troubleshooting the communication path between NSX Manager, vsfwd, and netcpa.
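To get a feel for what reading such a file looks like, here is a sketch that parses controller endpoints out of an XML snippet. The element names and addresses below are hypothetical and simplified for illustration; inspect your own config-by-vsm.xml for the real layout:

```python
# Sketch: extracting controller endpoints from an XML config snippet.
# The tag names and IPs here are invented for the example; they are NOT
# guaranteed to match the real config-by-vsm.xml schema.
import xml.etree.ElementTree as ET

sample = """
<config>
  <connectionList>
    <connection><server>192.168.110.31</server><port>1234</port></connection>
    <connection><server>192.168.110.32</server><port>1234</port></connection>
  </connectionList>
</config>
"""

root = ET.fromstring(sample)
controllers = [(c.findtext("server"), int(c.findtext("port")))
               for c in root.iter("connection")]
print(controllers)
# [('192.168.110.31', 1234), ('192.168.110.32', 1234)]
```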

Netcpa randomly chooses a controller to establish the initial connection, called the core session. This core session is used to transmit the controller sharding table to the hosts, so they know which controller is responsible for a particular VNI or routing instance.


Hope this helps you in understanding NSX Controllers. Happy Learning 🙂


Learn VMware Cloud Foundation – Part 01

Starting a series of posts on learning VMware Cloud Foundation; here is the first in the series…

VMware Cloud Foundation brings together vSphere, vSAN, and NSX into a next-generation hyper-converged platform. The product that ties all of these components together in an easy-to-deploy and easy-to-consume manner is a new component called SDDC Manager, which allows you to consume the entire stack as a unified single entity.

VMware Cloud Foundation can be consumed both on premises on qualified hardware and as a service from cloud partners such as IBM SoftLayer. Customers can now build a true hybrid cloud, linking the private and public clouds through this unified and common foundation across both environments.


Why Choose VCF

Standard design – virtualization components like vSphere ESXi, vSAN, and NSX, and management components like vCenter, NSX Manager, and the Controllers, are automatically deployed and configured according to a validated datacenter architecture based on best practices. This eliminates enterprises' lengthy planning cycles for vSphere, NSX, vSAN, vROps, and Log Insight design and deployment.

Fully integrated stack – with VCF, the VMware virtualization components (ESXi, vSAN, NSX) and the management software (vCenter, Log Insight, vROps, SDDC Manager) are combined into a single cloud infrastructure platform, eliminating the need to rely on complex interoperability matrixes.

Automates Hardware and Software Bring-Up – Cloud Foundation automates the installation of the entire VMware software stack. Once the rack is installed and powered on and the networking is in place, SDDC Manager leverages its knowledge of the hardware bill of materials and user-provided environmental information (e.g., DNS, IP address pools) to initialize the rack. Time savings vary by customer, but software installation time is estimated to be reduced from several weeks to as little as two hours thanks to the automation of previously manual functions, including provisioning of networks, allocation of resources based on service needs, and provisioning of endpoints. When the process completes, the customer has a virtual infrastructure ready to start deploying vSphere clusters and provisioning workloads.

Lifecycle management automation – Data center upgrades and patch management are typically manual, repetitive tasks that are prone to configuration and implementation errors. Validation testing of software and hardware firmware to ensure interoperability among components when one component is patched or upgraded requires extensive quality assurance testing in staging environments.

SDDC Manager provides built-in capabilities to automate the bring-up, configuration, provisioning, and patching/upgrades of the cloud infrastructure. Lifecycle management in SDDC Manager can be applied to the entire infrastructure or to specific workload domains and is designed to be non-disruptive to tenant virtual machines (VMs).


Integrates Management of Physical and Virtual Infrastructure – SDDC Manager understands the physical and logical topology of the software defined data center and the underlying components’ relation to each other, and efficiently monitors the infrastructure to detect potential risks, degradations and failures. SDDC Manager provides stateful alert management to prevent notification spam on problem detection. Each notification includes a clear description of the problem and provides remediation actions needed to restore service.

Components of VCF –


Physical Architecture of VCF

A VCF instance starts with a single rack and scales up to eight racks, each containing up to 32 hosts, for a total of 256 hosts per VCF instance. Each rack contains between 4 and 32 vSAN Ready Nodes, one management switch, and two top-of-rack (ToR) switches. In multi-rack configurations, a pair of redundant spine switches (placed in the second rack) is added to provide inter-rack connectivity.
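The scaling arithmetic above can be checked in a couple of lines (the figures are taken directly from the text):

```python
# Capacity math for a VCF instance, using the figures from the text.
MAX_RACKS = 8
MAX_HOSTS_PER_RACK = 32
MIN_HOSTS_PER_RACK = 4

max_hosts = MAX_RACKS * MAX_HOSTS_PER_RACK
print(max_hosts)  # 256 hosts per fully scaled VCF instance

min_hosts = 1 * MIN_HOSTS_PER_RACK  # smallest deployment: one rack
print(min_hosts)  # 4 hosts
```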


Spine Switches

The Cloud Foundation system contains two spine switches. These switches extend the network fabric of the top of rack (ToR) switches between racks and are used for inter-rack connectivity only. The hardware vendor connects the available uplink ports of the ToR switches to the spine switches.
Spine switches are required only in multi-rack installations of Cloud Foundation and are placed in the second rack.

Management Switch

The management switch provides out-of-band (OOB) connectivity to the baseboard management controller (BMC) on each server. The management network fabric does not carry vSphere management, vSAN, or vMotion traffic; that traffic resides on the network fabric created by the ToR and spine switches. As a result, the management switch is a non-redundant component in the physical rack. If this switch goes down, some functionality such as monitoring may not be available until it comes back up. Workloads will continue to run, but the infrastructure associated with them cannot be modified or controlled.

Open Hardware Management System (OHMS) – Each management switch runs OHMS, which has recently been made open source. OHMS is a Java runtime software agent that is invoked to manage physical hardware across the racks. SDDC Manager communicates with OHMS to configure switches and hosts (via Cisco APIs, CIMC, Dell interfaces, etc.). VMware has developed plugins for Arista and Cisco, and now that OHMS is open source, vendors can write their own plugins for other hardware platforms.

Top of Rack Switches

A physical rack contains two top-of-rack (ToR) switches, each with 48 10GbE ports and at least four 40GbE uplink ports. The ToR and spine switches carry all network traffic from the servers, including VM network, VM management, vSAN, and vMotion traffic. On rack 1 in a multi-rack Cloud Foundation, the ToRs also carry traffic to the enterprise network via two of the uplink ports. The ToR switches provide higher bandwidth as well as redundancy for continued operation in case one of the ToR switches goes down. If the installation has spine switches, two uplink ports from each ToR switch on each rack are connected to each spine switch.

Servers

A physical rack must contain a minimum of four dual-socket 1U servers. You can incrementally add servers to the rack up to a maximum of 32. All servers within a rack must be of the same model and type, and the disk size and storage configuration must be identical as well. Memory and CPU (e.g., per-CPU core count) can vary between servers.

Management Domain

SDDC Manager configures the first four servers in each physical rack into an entity called the management domain. After you deploy Cloud Foundation, you can expand the management domain. The management domain manages the hosts in that rack, and all disk drives are claimed by vSAN. The management domain contains the vCenter Server Appliance (with vCenter Server and the Platform Services Controller as separate VMs) managing the vSphere cluster with HA and DRS enabled, plus the following VMs:

  • NSX Manager
  • vRealize Operations
  • vRealize Log Insight
  • SDDC Manager


Physical Network Connectivity

All hosts in a physical rack are connected to both ToR switches with 10Gb links. On each host, NIC port 1 is connected to ToR switch 1 and NIC port 2 to ToR switch 2 with link aggregation (LAG).
The BMC on each host is connected to the management switch over a 1Gb connection, which is used for OOB management. Both ToR switches are further connected to a pair of spine switches in a dual-LAG configuration using 40Gb links. The spine switches act as an aggregation layer for connecting multiple racks.

Physical Storage Connectivity

The primary source of storage for Cloud Foundation is vSAN; all disks are claimed by vSAN for storage.
The amount of available physical storage in workload domains depends on the number of physical hosts. Storage traffic is carried over the 10Gbps links between the hosts and the ToR switches, and all vSAN member hosts communicate over this 10Gbps network.

vSphere Network I/O Control (NIOC) can be enabled to allow network resource management to use network resource pools to prioritize network traffic by type.

This covers the hardware architecture of VMware Cloud Foundation. Next, I will be covering the software components of VCF. Till then, Happy Learning 🙂

