Overlay technologies have been widely deployed because they decouple connectivity in the logical space from the physical network infrastructure. Devices connected to logical networks can leverage the entire set of network functions previously highlighted in Figure 5, independent of the underlying physical infrastructure configuration. The physical network effectively becomes a backplane used to transport overlay traffic.
This decoupling effect helps solve many challenges traditional data center deployments are currently facing:
- Agile/Rapid Application Deployment: Traditional networking design is a bottleneck, preventing the rollout of new applications at the pace the business demands. The overhead required to provision the network infrastructure in support of a new application is often counted in days, if not weeks.
- Workload Mobility: Compute virtualization enables mobility of virtual workloads across different physical servers connected to the data center network. In traditional data center designs, this requires extension of L2 domains (VLANs) across the entire data center network infrastructure. This affects the overall network scalability and potentially jeopardizes the resiliency of the design.
- Large Scale Multi-Tenancy: The use of VLANs as a means of creating isolated networks limits the maximum number of tenants that can be supported (i.e., 4094 VLANs). While this value may currently be sufficient for typical enterprise deployments, it is becoming a serious bottleneck for many cloud providers.
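The scale gap between the two identifier spaces is easy to verify with a quick calculation; the figures follow directly from the 12-bit VLAN ID defined in IEEE 802.1Q and the 24-bit VXLAN Network Identifier (VNI) defined in RFC 7348:

```python
# 802.1Q carries a 12-bit VLAN ID; IDs 0 and 4095 are reserved,
# leaving 4094 usable VLANs.
usable_vlans = 2**12 - 2

# VXLAN (RFC 7348) carries a 24-bit VNI, giving roughly
# 16 million logical segments per transport network.
vni_space = 2**24

print(usable_vlans)  # 4094
print(vni_space)     # 16777216
```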
Virtual Extensible LAN (VXLAN) has become the de facto standard overlay technology, embraced by multiple vendors; it was developed by VMware in conjunction with Arista, Broadcom, Cisco, Citrix, Red Hat, and others. Deploying VXLAN is key to building logical networks that provide L2 adjacency between workloads without the issues and scalability concerns found in traditional L2 technologies.
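The encapsulation itself is simple: an 8-byte VXLAN header carrying the 24-bit VNI, placed inside a UDP datagram. A minimal sketch of building that header with Python's struct module, following the RFC 7348 layout (the function name here is illustrative, not an NSX API):

```python
import struct

def build_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header defined in RFC 7348.

    Layout: 1 byte of flags (0x08 = 'I' bit, VNI valid),
    3 reserved bytes, 3 bytes of VNI, 1 trailing reserved byte.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # 'I' flag set: the VNI field is valid
    # Pack flags plus 3 reserved bytes, then the VNI shifted into the
    # upper 24 bits of the second 32-bit word (low byte reserved).
    return struct.pack("!I", flags << 24) + struct.pack("!I", vni << 8)

header = build_vxlan_header(5001)
print(header.hex())  # '0800000000138900'
```

The outer UDP/IP/Ethernet headers are added around this, which is why the transport network's MTU must be raised above the default 1500 bytes.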
Configure VXLAN transport
The VXLAN network is used for Layer 2 logical switching across hosts and can potentially span multiple underlying Layer 3 domains. VXLAN is configured on a per-cluster basis, where each cluster that is to participate in NSX is mapped to a vSphere Distributed Switch. When mapping a cluster to a vSphere Distributed Switch, each host in that cluster is enabled for logical switches. The settings chosen here will be used in creating the VMkernel interface.
If logical routing and switching are needed, all clusters that have NSX VIBs installed on the hosts should also have VXLAN transport parameters configured. If you only plan to deploy a distributed firewall, it is not necessary to configure VXLAN transport parameters.
The teaming policy for VXLAN transport must be based on the topology of the physical switches. It is recommended that teaming policies are not mixed for different port groups on a vSphere Distributed Switch.
For certain teaming modes, VMware software creates multiple VTEPs to load balance traffic among the physical NICs. The NSX Reference Design Guide contains a table with the different teaming policy configuration options.
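As a rough sketch of that relationship (the mapping below is an assumption distilled from the design-guide table, not an exhaustive reproduction of it): source-port and source-MAC hashing create one VTEP per dvUplink, while LACP, static EtherChannel, and explicit failover use a single VTEP.

```python
# Illustrative mapping (an assumption based on the NSX Reference Design
# Guide's teaming table; consult the guide for the authoritative list).
MULTI_VTEP_POLICIES = {"SRCID", "SRCMAC"}   # one VTEP per dvUplink
SINGLE_VTEP_POLICIES = {"LACP", "ETHER_CHANNEL", "FAILOVER"}

def vtep_count(teaming_policy: str, dvuplinks: int) -> int:
    """Return the number of VTEPs created for a given teaming mode."""
    if teaming_policy in MULTI_VTEP_POLICIES:
        return dvuplinks  # traffic is balanced across per-uplink VTEPs
    if teaming_policy in SINGLE_VTEP_POLICIES:
        return 1
    raise ValueError(f"unknown teaming policy: {teaming_policy}")

print(vtep_count("SRCID", 2))  # 2
print(vtep_count("LACP", 2))   # 1
```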
To configure VXLAN transport:
Log in to the vSphere Web Client and click Networking & Security.
Select Installation under the Networking & Security section and select the Host Preparation tab.
Click the Not Configured link in the VXLAN column of the appropriate cluster.
- From the Switch drop-down menu, select the appropriate vSphere Distributed Switch.
- In the VLAN text box, enter the VLAN number to be used for the VXLAN VTEP interfaces.
- Enter 1600 in the MTU text box.
- Select Use IP Pool.
- From the IP Pool drop-down menu, select the appropriate IP pool to use for the VTEP interface addressing.
- Select the appropriate VMKNic Teaming Policy from the drop-down menu.
- Note: The number of VTEPs is not editable in the UI; it is set to match the number of dvUplinks on the vSphere Distributed Switch being prepared.
- Click OK.
This prepares the cluster for the creation of logical switches (virtual wires).
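The 1600-byte MTU entered in the steps above is driven by VXLAN's encapsulation overhead. A quick sanity check of that figure (a sketch; the per-header sizes are the standard ones, with 20 more bytes for an IPv6 underlay or 4 more for a tagged transport VLAN):

```python
# Headers added between the guest's 1500-byte IP payload and the
# underlay's IP MTU, in bytes.
INNER_ETHERNET = 14   # the guest frame's own MAC header, now payload
VXLAN_HEADER   = 8    # VXLAN header carrying the VNI
OUTER_UDP      = 8    # outer UDP header
OUTER_IPV4     = 20   # outer IPv4 header

overhead = INNER_ETHERNET + VXLAN_HEADER + OUTER_UDP + OUTER_IPV4
print(overhead)         # 50

# Minimum underlay MTU for a standard 1500-byte guest MTU:
print(1500 + overhead)  # 1550 -- the configured 1600 adds headroom
```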