VMware has released NSX for vSphere 6.3.0 along with vCenter 6.5a and ESXi 6.5a. Before upgrading your NSX environment to 6.3, you must upgrade your vCenter and ESXi hosts to 6.5a, as described in KB 2148841.
This version introduces a lot of great new features and enhancements; a few standouts that appealed to me are listed below:
Controller Disconnected Operation (CDO) mode: A new feature called Controller Disconnected Operation (CDO) mode has been introduced. This mode ensures that data plane connectivity is unaffected when hosts lose connectivity with the controller (I have seen a few customers affected by exactly this). If you run into issues with the controller, you can enable CDO mode to avoid temporary connectivity problems.
You can enable CDO mode for each host cluster through the transport zone. CDO mode is disabled by default.
When CDO mode is enabled, NSX Manager creates a special CDO logical switch, one per transport zone. The VXLAN Network Identifier (VNI) of this CDO logical switch is distinct from those of all other logical switches. With CDO mode enabled, one controller in the cluster is responsible for collecting the VTEP information reported by all transport nodes and replicating the updated VTEP information to all other transport nodes. When that controller fails, a new controller is elected as master to take over this responsibility, all transport nodes connected to the original master migrate to the new master, and data is re-synced between the transport nodes and the controllers.
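The master-election and VTEP-replication behavior described above can be sketched roughly as follows. This is a simplified Python model, not the actual NSX controller implementation; the class, method, and node names are purely illustrative:

```python
class ControllerCluster:
    """Toy model of CDO-mode VTEP replication and master election."""

    def __init__(self, controllers):
        self.controllers = list(controllers)  # e.g. ["ctrl-1", "ctrl-2", "ctrl-3"]
        self.master = self.controllers[0]     # one controller owns VTEP collection
        self.vteps = {}                       # transport node -> reported VTEP IP

    def report_vtep(self, transport_node, vtep_ip):
        # The master collects VTEP info reported by every transport node ...
        self.vteps[transport_node] = vtep_ip
        # ... and replicates the updated table back to all transport nodes.
        return dict(self.vteps)

    def fail_master(self):
        # On master failure, a surviving controller is elected as the new
        # master; transport nodes reconnect and the VTEP table is re-synced.
        self.controllers.remove(self.master)
        self.master = self.controllers[0]
        return self.master


cluster = ControllerCluster(["ctrl-1", "ctrl-2", "ctrl-3"])
cluster.report_vtep("host-a", "10.0.0.1")
table = cluster.report_vtep("host-b", "10.0.0.2")
new_master = cluster.fail_master()
```

The key point the sketch captures is that the VTEP table survives the master failure, which is why the data plane keeps working while the control plane recovers.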
If you add a new cluster to the transport zone, NSX Manager pushes the CDO mode and VNI to the newly added hosts. If you remove the cluster, NSX Manager removes the VNI data from the hosts.
When you disable CDO mode on a transport zone, NSX Manager removes the CDO logical switch.
In a Cross-vCenter NSX environment, you can enable CDO mode only on local transport zones, or, in a topology with no local transport zones, on the single universal transport zone of the primary NSX Manager. The CDO mode is replicated to the universal transport zone for all secondary NSX Managers. You can also enable CDO mode on a local transport zone for a secondary NSX Manager.
Primary NSX Manager: You can enable CDO mode only on transport zones that do not share the same distributed virtual switch. If the universal transport zone and the local transport zones share the same distributed virtual switch, then CDO mode can be enabled only on the universal transport zone.
Secondary NSX Manager: The CDO mode gets replicated on the universal transport zone for all secondary NSX Managers. You can enable CDO mode on local transport zones if they do not share the same distributed virtual switch.
Cross-vCenter NSX Active-Standby DFW Enhancements:
- Multiple Universal DFW sections are now supported. Both Universal and Local rules can consume Universal security groups in source/destination/AppliedTo fields.
- Universal security groups: Universal Security Group membership can be defined in a static or dynamic manner. Static membership is achieved by manually adding a universal security tag to each VM. Dynamic membership is achieved by adding VMs as members based on dynamic criteria, such as VM name.
- Universal Security tags: You can now define Universal Security tags on the primary NSX Manager and mark for universal synchronization with secondary NSX Managers. Universal Security tags can be assigned to VMs statically, based on unique ID selection, or dynamically, in response to criteria such as antivirus or vulnerability scans.
Unique ID Selection Criteria: In earlier releases of NSX, security tags are local to an NSX Manager and are mapped to VMs using the VM's managed object ID. In an active-standby environment, the managed object ID for a given VM might not be the same in the active and standby datacenters. NSX 6.3.x allows you to configure a Unique ID Selection Criteria on the primary NSX Manager that is used to identify VMs when attaching them to universal security tags: VM instance UUID, VM BIOS UUID, VM name, or a combination of these options.
- Unique ID Selection Criteria
Use Virtual Machine instance UUID (recommended) – The VM instance UUID is generally unique within a vCenter domain; however, there are exceptions, such as when deployments are made through snapshots. If the VM instance UUID is not unique, we recommend using the VM BIOS UUID in combination with the VM name.
Use Virtual Machine BIOS UUID – The BIOS UUID is not guaranteed to be unique within a vCenter domain, but it is always preserved in case of disaster. We recommend using the BIOS UUID in combination with the VM name.
Use Virtual Machine Name – If all of the VM names in an environment are unique, the VM name can be used to identify a VM across vCenters. We recommend using the VM name in combination with the VM BIOS UUID.
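As a rough illustration of why a combined selection criterion helps, here is a small Python sketch (the function, field names, and VM data are made up for this example) that builds a VM identity key from the configured criteria:

```python
def vm_identity(vm, criteria):
    """Build an identity key for a VM from the configured selection
    criteria, e.g. ("instance_uuid",) or ("bios_uuid", "name")."""
    return tuple(vm[c] for c in criteria)

# Two VMs deployed from the same template/snapshot can end up sharing
# a BIOS UUID ...
vm1 = {"instance_uuid": "i-111", "bios_uuid": "b-999", "name": "web-01"}
vm2 = {"instance_uuid": "i-222", "bios_uuid": "b-999", "name": "web-02"}

# ... so the BIOS UUID alone is ambiguous:
ambiguous = vm_identity(vm1, ("bios_uuid",)) == vm_identity(vm2, ("bios_uuid",))

# Combining the BIOS UUID with the VM name disambiguates them:
unique = (vm_identity(vm1, ("bios_uuid", "name"))
          != vm_identity(vm2, ("bios_uuid", "name")))
```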
Control Plane Agent (netcpa) Auto-recovery: An enhanced auto-recovery mechanism for the netcpa process ensures continuous data path communication. A monitoring process automatically restarts netcpa if problems occur and raises alerts through the syslog server. A summary of the benefits:
- Automatic netcpa process monitoring.
- Process auto-restart in case of problems, for example, if the system hangs.
- Automatic core file generation for debugging.
- Alert via syslog of the automatic restart event.
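A process watchdog of this kind can be sketched in a few lines of Python. This is purely illustrative of the monitor/restart/alert pattern; NSX's actual netcpa monitor is internal to the host, and all names here are invented:

```python
import logging

def watchdog(is_healthy, restart, alert=logging.warning):
    """Check a process and restart it if unhealthy, alerting via a
    syslog-like logger. Returns True if a restart was performed."""
    if is_healthy():
        return False
    alert("netcpa unresponsive; generating core file and restarting")
    restart()
    return True

# Simulate a hung process that the watchdog brings back:
state = {"running": False}
restarted = watchdog(lambda: state["running"],
                     lambda: state.update(running=True))
recheck = watchdog(lambda: state["running"],
                   lambda: state.update(running=True))
```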
Log Insight Content Pack: This has been updated for Load Balancer to provide a centralized Dashboard, end-to-end monitoring, and better capacity planning from the user interface (UI).
Drain state for Load Balancer pool members: You can now put a pool member into Drain state, which forces the server to shut down gracefully for maintenance. Setting a pool member to drain state removes the backend server from load balancing of new connections, but still allows the server to accept new, persistent connections.
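The drain behavior can be modeled simply: drained members are skipped for new connections unless the client has a persistence entry pinning it to that member. This is a toy Python sketch of that selection rule, not the Edge load balancer's real logic:

```python
def pick_member(members, persistence, client):
    """members: {name: state}, state in {"enabled", "drain", "disabled"}.
    persistence: {client: member} table of existing sticky sessions."""
    pinned = persistence.get(client)
    # A client with an existing persistent session may still reach its
    # drained member ...
    if pinned and members.get(pinned) in ("enabled", "drain"):
        return pinned
    # ... but new clients are only balanced across enabled members.
    for name, state in members.items():
        if state == "enabled":
            return name
    return None

members = {"srv-1": "drain", "srv-2": "enabled"}
persistence = {"client-a": "srv-1"}
kept = pick_member(members, persistence, "client-a")   # existing session
fresh = pick_member(members, persistence, "client-b")  # new client
```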
4-byte ASN support for BGP: 4-byte ASN support is now available for BGP configuration, with backward compatibility for pre-existing 2-byte ASN BGP peers.
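For backward compatibility, BGP speakers that only understand 2-byte ASNs see the reserved value AS_TRANS (23456) in place of a 4-byte ASN, per RFC 6793. A tiny Python sketch of that substitution (the function name is illustrative):

```python
AS_TRANS = 23456  # reserved 2-byte ASN standing in for any 4-byte ASN

def asn_for_peer(local_asn, peer_supports_4byte):
    """Return the ASN value to advertise to a given peer."""
    if local_asn < 65536 or peer_supports_4byte:
        return local_asn
    # Legacy 2-byte-only peers see AS_TRANS; the real 4-byte ASN is
    # carried separately in the AS4 capability/attributes.
    return AS_TRANS

old_peer = asn_for_peer(4_200_000_000, peer_supports_4byte=False)
new_peer = asn_for_peer(4_200_000_000, peer_supports_4byte=True)
```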
Improved Layer 2 VPN performance: Performance for Layer 2 VPN has been improved. This allows a single Edge appliance to support up to 1.5 Gb/s throughput, which is an improvement from the previous 750 Mb/s.
Improved Configurability for OSPF: When configuring OSPF on an Edge Services Gateway (ESG), the NSSA can now translate all Type-7 LSAs to Type-5 LSAs.
DFW timers: NSX 6.3.0 introduces Session Timers that define how long a session is maintained on the firewall after inactivity. When the session timeout for the protocol expires, the session closes. On the firewall, you can define timeouts for TCP, UDP, and ICMP sessions and apply them to a user-defined set of VMs or vNICs.
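Conceptually, the firewall keeps a per-session last-activity timestamp and closes sessions whose protocol-specific inactivity timeout has elapsed. A minimal Python sketch with made-up timeout values (NSX's actual defaults differ per protocol and are configurable):

```python
TIMEOUTS = {"tcp": 3600, "udp": 60, "icmp": 10}  # seconds, illustrative only

def expired_sessions(sessions, now):
    """sessions: {session_id: (protocol, last_activity_ts)}.
    Return the ids whose inactivity exceeds the protocol timeout."""
    return [sid for sid, (proto, last) in sessions.items()
            if now - last > TIMEOUTS[proto]]

sessions = {
    "s1": ("tcp", 1000),   # idle 500 s -> within the TCP timeout
    "s2": ("udp", 1000),   # idle 500 s -> past the 60 s UDP timeout
    "s3": ("icmp", 1495),  # idle 5 s  -> within the ICMP timeout
}
closed = expired_sessions(sessions, now=1500)
```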
Linux support for Guest Introspection: NSX 6.3.0 enables Guest Introspection for Linux VMs. On Linux-based guest VMs, the NSX Guest Introspection feature leverages the fanotify and inotify capabilities provided by the Linux kernel.
Publish Status for Service Composer: Service Composer publish status is now available to check whether a policy is synchronized. This provides increased visibility of security policy translations into DFW rules on the host.
NSX kernel modules now independent of ESXi version: Starting in NSX 6.3.0, NSX kernel modules use only the publicly available VMKAPI so that the interfaces are guaranteed across releases. This enhancement helps reduce the chance of host upgrades failing due to incorrect kernel module versions. In earlier releases, every ESXi upgrade in an NSX environment required at least two reboots to make sure the NSX functionality continued to work (due to having to push new kernel modules for every new ESXi version).
Rebootless upgrade and uninstall on hosts: On vSphere 6.0 and later, once you have upgraded to NSX 6.3.0, any subsequent NSX VIB changes will not require a reboot. Instead, hosts must enter maintenance mode to complete the VIB change. This includes the NSX 6.3.x VIB install that is required after an ESXi upgrade, any NSX 6.3.x VIB uninstall, and future NSX 6.3.0 to NSX 6.3.x upgrades. Upgrading from NSX versions earlier than 6.3.0 to NSX 6.3.x still requires that you reboot hosts to complete the upgrade. Upgrading NSX 6.3.x on vSphere 5.5 also still requires that you reboot hosts to complete the upgrade.
Happy learning 🙂