Custom TCP/IP Stacks

In vSphere 5.1 and earlier, there was a single TCP/IP stack used by all the different types of network traffic. This meant that management, virtual machine (VM) traffic, vMotion, NFC, etc. were all fixed to the same stack, and because of the shared stack, all VMkernel interfaces had some things in common: they had to share the same default gateway, the same memory heap, and the same ARP and routing tables.

From vSphere 5.5 onward, VMware changed this behavior to allow multiple TCP/IP stacks, but with some limitations: only certain types of traffic could use a stack other than the default one, and a custom TCP/IP stack had to be created from the command line using an ESXCLI command.

In vSphere 6, VMware went a step further and created separate vMotion and Provisioning stacks by default, and when you deploy NSX, it creates its own stack as well.

NOTE – Custom TCP/IP stacks aren’t supported for use with fault tolerance logging, management traffic, Virtual SAN traffic, vSphere Replication traffic, or vSphere Replication NFC traffic.

Create a Custom TCP/IP Stack:

To create a new TCP/IP stack, we must use ESXCLI:

>esxcli network ip netstack add -N="stack name"


Modify Custom TCP/IP Stack Settings:

Now that you have created your custom stack, it needs to be configured with settings such as which DNS servers to use and which address to use as the default gateway. In the advanced settings you can also configure which congestion control algorithm to use and the maximum number of connections that can be active at any particular time.

To configure these settings, go to Manage > Networking > TCP/IP configuration and highlight the stack to be configured.


On the edit page, these settings can be modified:

  • Name. Change the name if required.
  • DNS Configuration. Use DHCP, so the custom TCP/IP stack can pick up its settings from DHCP, or specify static DNS servers with search domains.
  • Default Gateway. One of the primary reasons for creating a TCP/IP stack separate from the default one; the gateway for this stack is configured here.
  • Congestion Control Algorithm. The algorithm specified here affects vSphere’s response when congestion is suspected on the network stack.
  • Max Number of Connections. The default is set to 11,000.


Configuring custom TCP/IP stacks also lets you have a separate gateway for each kind of traffic, if that is how the network has been designed.
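If you prefer to script this, the stack creation and the per-stack default gateway can also be driven from PowerCLI through the host's esxcli interface. The following is a minimal sketch, assuming a host named esxi01.lab.local, a stack named custom-stack, and a gateway of 192.168.50.1 (all placeholders); the exact argument names can be confirmed with CreateArgs() on your PowerCLI/ESXi build.

```powershell
# Assumes an existing PowerCLI session: Connect-VIServer vcenter.lab.local
$esxcli = Get-EsxCli -VMHost (Get-VMHost 'esxi01.lab.local') -V2

# Create the custom TCP/IP stack (equivalent to: esxcli network ip netstack add -N="custom-stack")
$stackArgs = $esxcli.network.ip.netstack.add.CreateArgs()
$stackArgs.netstack = 'custom-stack'
$esxcli.network.ip.netstack.add.Invoke($stackArgs)

# Confirm the stack now exists on the host
$esxcli.network.ip.netstack.list.Invoke()

# Give this stack its own default gateway, independent of the default stack
$routeArgs = $esxcli.network.ip.route.ipv4.add.CreateArgs()
$routeArgs.netstack = 'custom-stack'
$routeArgs.network  = 'default'
$routeArgs.gateway  = '192.168.50.1'
$esxcli.network.ip.route.ipv4.add.Invoke($routeArgs)
```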

Learn NSX – Part-13 (Configure OSPF for DLR)

Configuring OSPF on a logical router enables VM connectivity across logical routers and from logical routers to edge services gateways (ESGs).

OSPF routing policies provide a dynamic process of traffic load balancing between routes of equal cost. An OSPF network is divided into routing areas to optimize traffic flow and limit the size of routing tables. An area is a logical collection of OSPF networks, routers, and links that share the same area identification; areas are identified by an Area ID.

Before we proceed with the OSPF configuration, a Router ID must first be configured on our deployed DLR. To configure the Router ID:

1- Log in to the vSphere Web Client and click Networking & Security.

2- Select NSX Edges under the Networking & Security section.

3 – Double-click the distributed logical router on which to configure OSPF.


4 – On the Manage tab:

  1. Select the Routing tab.
  2. Select Global Configuration section from the options on the left.
  3. Under the Dynamic Routing Configuration section, click Edit.


5 – In the Edit Dynamic Routing Configuration dialog box:

  1. Select an interface from the Router ID drop-down menu to use as the OSPF Router ID.
  2. Select the Enable Logging check box.
  3. Select Info from the Log Level drop-down menu.
  4. Click OK.


6 – Click Publish Changes.


Now that the Router ID is configured, let’s configure OSPF:

1 – Select the OSPF section from the options on the left:

  1. Under the Area Definitions section, click the green + icon to add an OSPF area.


2 – In the New Area Definition dialog box:

  1. Enter the OSPF area ID in the Area ID text box.
  2. Select the required OSPF area type from the Type drop-down menu:
    1. Normal
    2. NSSA, which prevents the flooding of AS-external link-state advertisements (LSAs) into NSSAs.
  3. (Optional) Select the required authentication type from the Authentication drop-down menu and enter a password.
  4. Click OK.


3 – Under the Area to Interface Mapping section, click the green + icon.


4 – In the New Area to Interface Mapping dialog box:

  1. Select an appropriate uplink interface in the Interface drop-down menu.
  2. Select the OSPF area from the Area drop-down menu.
  3. Enter 1 in the Hello Interval text box.
  4. Enter 3 in the Dead Interval text box.
  5. Enter 128 in the Priority text box.
  6. Enter 1 in the Cost text box.
  7. Click OK.


5 – Under the OSPF Configuration section, click Edit. In the OSPF Configuration dialog box:

  1. Select the Enable OSPF check box.
  2. Enter the OSPF protocol address in the Protocol Address text box.
  3. Enter the OSPF forwarding address in the Forwarding Address text box.
  4. Select the Enable Graceful Restart check box for packet forwarding to be uninterrupted during a restart of OSPF services.
  5. (Optional) Select the Enable Default Originate check box to allow the NSX Edge to advertise itself as a default gateway to its peers.
  6. Click OK.


6 – Click Publish Changes.


7 – Select the Route Redistribution section from the options on the left:

  1. Click Edit.
  2. Select OSPF.
  3. Click OK.


8 – Under the Route Redistribution table section, click the green + icon. In the New Redistribution criteria dialog box:

  1. Select the IP prefixes from the Prefix Name drop-down menu.
  2. Select OSPF from the Learner Protocol drop-down menu.
  3. Select the appropriate options under Allow Learning From.
  4. Select Permit from the Action drop-down menu.
  5. Click OK.


9 – Click Publish Changes.


To complete the lab topology, add an interface on this DLR for each of the logical switches created in my other posts. VMs connected to both logical switches will now be able to talk to each other, because the logical router’s connected routes (172.16.10.0/24 and 172.16.20.0/24) are advertised into OSPF.
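The OSPF settings configured above can also be read back (or pushed in bulk) over the NSX REST API, in the same spirit as the Invoke-RestMethod post later in this series. A minimal sketch, assuming NSX Manager at nsxmgr.lab.local, placeholder credentials, and a DLR edge ID of edge-1 (check your actual edge ID in the NSX Edges list):

```powershell
# Lab only: accept NSX Manager's self-signed certificate
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

# Basic authentication header for the NSX API (placeholder credentials)
$auth    = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes('admin:VMware1!VMware1!'))
$headers = @{ Authorization = "Basic $auth" }

# Read the OSPF configuration of the DLR (edge-1 is a placeholder edge ID)
$uri  = 'https://nsxmgr.lab.local/api/4.0/edges/edge-1/routing/config/ospf'
$ospf = Invoke-RestMethod -Uri $uri -Method Get -Headers $headers

# The response is XML; dump it to verify areas and interface mappings
$ospf.OuterXml
```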


 

VMFS 6

vSphere 6.5 has been released with a new file system, the first since VMFS 5 was released in 2011. The good news is that VMFS6 supports 512 devices and 2,048 paths; in previous versions the limit was 256 devices and 1,024 paths, and some customers were hitting this limit in their clusters. Especially when RDMs are used, when a limited number of VMs is kept per datastore, or when 8 paths to each device are used, it becomes easy to hit those limits. Hopefully with 6.5 that will not happen anytime soon.
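To see how close a host already is to the old limits, a quick PowerCLI count of devices and paths helps; a small sketch, assuming a connected vCenter session and a host named esxi01.lab.local:

```powershell
$vmhost = Get-VMHost 'esxi01.lab.local'

# Number of SCSI disk devices presented to this host (limit was 256 before vSphere 6.5)
$disks = Get-ScsiLun -VmHost $vmhost -LunType disk
$disks.Count

# Total number of paths to those devices (limit was 1024 before vSphere 6.5)
($disks | Get-ScsiLunPath).Count
```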

Automatic space reclamation is the next feature that many customers and administrators have been waiting for. Basically this is based on VAAI UNMAP, which has been around for a while and allows you to unmap previously used blocks. Using UNMAP, storage capacity is reclaimed and released to the array so that other volumes can use those blocks when needed. Previously we needed to run the UNMAP command manually to reclaim the blocks; now this has been integrated into the UI and can be turned on or off from the GUI.
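For datastores that are not yet on VMFS6 with automatic reclamation, the manual UNMAP can still be run through esxcli; a minimal PowerCLI sketch, assuming a host esxi01.lab.local and a datastore named Datastore01 (verify the argument names with CreateArgs() on your build):

```powershell
$esxcli = Get-EsxCli -VMHost (Get-VMHost 'esxi01.lab.local') -V2

# Equivalent to running: esxcli storage vmfs unmap -l Datastore01
$unmapArgs = $esxcli.storage.vmfs.unmap.CreateArgs()
$unmapArgs.volumelabel = 'Datastore01'
$esxcli.storage.vmfs.unmap.Invoke($unmapArgs)
```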

SESparse will be the snapshot format supported on VMFS6; vSphere will not support the VMFSsparse snapshot format on VMFS6, though it continues to be supported on VMFS5. VMFS6 and VMFS5 can co-exist, but there is no straightforward upgrade from VMFS5 to VMFS6. After you upgrade your ESXi hosts to version 6.5, you can continue using any existing VMFS5 datastores. To take advantage of VMFS6 features, create a VMFS6 datastore and migrate virtual machines from the VMFS5 datastore to the VMFS6 datastore; you cannot upgrade a VMFS5 datastore to VMFS6. Right now SESparse is used primarily for View and for virtual disks larger than 2 TB; with VMFS6 the default will be SESparse.
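Because there is no in-place upgrade, the practical path is to create a fresh VMFS6 datastore and Storage vMotion the VMs onto it. A hedged PowerCLI sketch, assuming a free device (the canonical name below is a placeholder) and a VM named App01:

```powershell
$vmhost = Get-VMHost 'esxi01.lab.local'

# Create a new VMFS6 datastore on an unused device (replace the canonical name with a real one)
New-Datastore -VMHost $vmhost -Name 'DS-VMFS6-01' -Path 'naa.xxxxxxxxxxxxxxxx' -Vmfs -FileSystemVersion 6

# Storage vMotion a VM from its VMFS5 datastore to the new VMFS6 datastore
Move-VM -VM (Get-VM 'App01') -Datastore (Get-Datastore 'DS-VMFS6-01')
```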

Comparison is as below:

Features and Functionalities | VMFS5 | VMFS6
Access for ESXi 6.5 hosts | Yes | Yes
Access for ESXi hosts version 6.0 and earlier | Yes | No
Datastores per host | 1024 | 1024
512n storage devices | Yes (default) | Yes
512e storage devices | Yes (not supported on local 512e devices) | Yes (default)
Automatic space reclamation | No | Yes
Manual space reclamation through the esxcli command | Yes | Yes
Space reclamation from guest OS | Limited | Yes
GPT storage device partitioning | Yes | Yes
MBR storage device partitioning | Yes (for a VMFS5 datastore that has been previously upgraded from VMFS3) | No
Storage devices greater than 2 TB for each VMFS extent | Yes | Yes
Support for virtual machines with large capacity virtual disks, or disks greater than 2 TB | Yes | Yes
Support of small files of 1 KB | Yes | Yes
Default use of ATS-only locking mechanisms on storage devices that support ATS | Yes | Yes
Block size | Standard 1 MB | Standard 1 MB
Default snapshots | VMFSsparse for virtual disks smaller than 2 TB; SEsparse for virtual disks larger than 2 TB | SEsparse
Virtual disk emulation type | 512n | 512n
vMotion | Yes | Yes
Storage vMotion across different datastore types | Yes | Yes
High Availability and Fault Tolerance | Yes | Yes
DRS and Storage DRS | Yes | Yes
RDM | Yes | Yes

Happy Learning 🙂

VMware vSphere Replication 6.5 released with 5 minute RPO

I was going through the release notes of vSphere Replication 6.5 and was really surprised to see the RPO in this new release, so I thought it worth sharing:

VMware vSphere Replication 6.5 provides the following new features:

  • 5-minute Recovery Point Objective (RPO) support for additional datastore types – This version of vSphere Replication extends support for the 5-minute RPO setting to the following new datastores: VMFS 5, VMFS 6, NFS 4.1, NFS 3, VVOL, and VSAN 6.5. This allows customers to replicate virtual machine workloads with an RPO setting as low as 5 minutes across these various datastore options.

Surely a great reason to upgrade to vSphere 6.5: identify which of your VMs can work with a 5-minute RPO and use this free replication feature of the vSphere suite.

Creating IP Sets with NSX API Calls using PowerShell

A few days back, one of my close friends was working with a customer whose NSX environment is a cross-vCenter NSX environment. Because of this they have to create IP sets, as IP sets are the universal objects that get synchronized with the secondary instance of NSX. Since the list of IP sets was big, we looked at automating it with the NSX API, but we didn’t want to make many API calls manually, so we decided to do it the PowerShell way using Invoke-RestMethod, which was introduced in PowerShell 3.0.

Invoke-RestMethod

The Invoke-RestMethod cmdlet sends HTTP and HTTPS requests to Representational State Transfer (REST) web services that return richly structured data. Windows PowerShell formats the response based on the data type. For an RSS or ATOM feed, Windows PowerShell returns the Item or Entry XML nodes. For JavaScript Object Notation (JSON) or XML, Windows PowerShell converts (or deserializes) the content into objects.
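As a quick illustration of that behavior (the endpoint below is purely hypothetical, not an NSX URL), a JSON response comes back already converted into PowerShell objects:

```powershell
# GET a JSON endpoint; the result is a PowerShell object, not raw text
$response = Invoke-RestMethod -Uri 'https://api.example.com/v1/items' -Method Get

# Properties of the deserialized objects can be used directly
$response | Select-Object -First 5
```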

CSV File Format used in Script:


Script to Create IP Sets from CSV is as below:

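The CSV layout and the script itself are shown as screenshots in the original post and do not reproduce here. As a rough sketch of what such a script could look like (not the exact script from the post), assume a CSV named ipsets.csv with Name, Value, and Description columns, NSX Manager at nsxmgr.lab.local, and universal IP sets created under the universalroot-0 scope (use globalroot-0 for local objects):

```powershell
# Lab only: accept NSX Manager's self-signed certificate
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

$nsxManager = 'nsxmgr.lab.local'   # placeholder NSX Manager FQDN
$auth       = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes('admin:VMware1!VMware1!'))
$headers    = @{ Authorization = "Basic $auth" }

# Assumed CSV columns: Name, Value (IP addresses/ranges/CIDRs), Description
$ipSets = Import-Csv -Path '.\ipsets.csv'

foreach ($ipSet in $ipSets) {
    # Minimal XML body for an NSX IP set
    $body = @"
<ipset>
  <name>$($ipSet.Name)</name>
  <value>$($ipSet.Value)</value>
  <description>$($ipSet.Description)</description>
</ipset>
"@

    # POST to the universal scope so the IP set is synchronized to the secondary NSX Manager
    $uri = "https://$nsxManager/api/2.0/services/ipset/universalroot-0"
    Invoke-RestMethod -Uri $uri -Method Post -Headers $headers -Body $body -ContentType 'application/xml'
}
```

Each successful POST returns the new IP set’s object ID, which can be captured if you want to log what was created.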

Result:


Finally, using Invoke-RestMethod or Invoke-WebRequest, we can automate many routine NSX tasks with PowerShell. I hope this helps in your daily administrative tasks :).

 

Learn NSX – Part-12 (Create NSX Edge Services Gateway)

Each NSX Edge virtual appliance can have a total of 10 uplink and internal network interfaces. Overlapping IP addresses are not allowed for internal interfaces, and overlapping subnets are not allowed for internal and uplink interfaces.

1- Log in to the vSphere Web Client and click Networking & Security.

2- Select NSX Edges under the Networking & Security section.

3- Click the green + icon to add a new NSX Edge.


4- In the Name and description dialog box:

  1. Select Edge Services Gateway as the Install Type.
  2. Enter a name for the NSX Edge services gateway in the Name text box. The name should be unique across all NSX Edge services gateways within a tenant.
  3. Enter a host name for the NSX Edge services gateway in the Hostname text box.
  4. Enter a description in the Description text box.
  5. Enter tenant details in the Tenant text box.
  6. Confirm that Deploy NSX Edge is selected (default).
  7. Select Enable High Availability to enable and configure high availability.
  8. Click Next.


5 – In the Settings dialog box:

  1. Leave the default user name of admin in the User Name text box.
  2. Enter a password in the Password and Confirm Password text boxes. The password must be 12 to 255 characters and must contain the following:
    1. At least one upper case letter
    2. At least one lower case letter
    3. At least one number
    4. At least one special character
  3. Select the Enable SSH access check box.
  4. Select the Enable auto rule generation check box.
  5. Select EMERGENCY from the Edge Control Level Logging drop-down menu.
  6. Click Next.


6 – In the Configure deployment dialog box:

  1. Select the data center from the Datacenter drop-down menu.
  2. Select the appropriate Appliance Size.
  3. Click the green + icon in the NSX Edge Appliances section.
  4. Select the cluster or resource pool from the Cluster/Resource Pool drop-down menu.
  5. Select the datastore from the Datastore drop-down menu.
  6. (Optional) Select the host from the Host drop-down menu.
  7. (Optional) Select the folder from the Folder drop-down menu.
  8. Click OK and click Next.


7 – In the Configure Interfaces dialog box:

  1. Under the Configure interfaces of this NSX Edge section, click the green + icon to create an interface.
    1. NOTE – You must add at least one internal interface for HA to work.
  2. Enter the NSX Edge interface name in the Name text box.
  3. Select Internal or Uplink as the Type.
  4. Click Change next to the Connected To selection box to choose the appropriate logical switch, standard port group or distributed port group with which to connect the interface.
  5. Select Connected for the Connectivity Status.
  6. Assign a primary IP address and subnet prefix length.
  7. Select the appropriate options.
  8. Select Enable Proxy ARP for overlapping network forwarding between different interfaces. Select Send ICMP Redirect to convey routing information to hosts.
  9. Click OK and click Next.


8 – In the Default gateway settings dialog box, deselect the Configure Default Gateway check box and click Next.


9 – In the Firewall and HA dialog box:

  1. Select the Configure Firewall default policy check box.
  2. Select Accept for Default Traffic Policy.
  3. Select Disable for Logging.
  4. (Optional) If high availability is enabled, complete the Configure HA parameters section. By default, HA automatically chooses an internal interface and automatically assigns link-local IP addresses.
  5. Click Next.


NOTE – If ANY is selected for the high availability interface but there are no internal interfaces configured, the user interface will not display an error. Two NSX Edge appliances will be created, but because there is no internal interface configured, the new NSX Edge appliances remain in standby and high availability is disabled. After an internal interface is configured, high availability will be enabled on the NSX Edge appliances.

10 – In the Ready to complete dialog box, review the configuration and click Finish.
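After Finish, the appliance deployment can be tracked in the NSX Edges list; the same inventory is available over the NSX REST API if you want to verify it from a script. A minimal sketch, assuming NSX Manager at nsxmgr.lab.local and placeholder credentials:

```powershell
# Lab only: accept NSX Manager's self-signed certificate
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

$auth    = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes('admin:VMware1!VMware1!'))
$headers = @{ Authorization = "Basic $auth" }

# List all NSX Edges (both ESGs and DLRs) known to this NSX Manager
$edges = Invoke-RestMethod -Uri 'https://nsxmgr.lab.local/api/4.0/edges' -Method Get -Headers $headers

# The response is XML; dump it and look for your new edge's name, ID, and deploy status
$edges.OuterXml
```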


Happy Learning 🙂

VM Component Protection (VMCP)

vSphere 6.0 has a powerful feature as part of vSphere HA called VM Component Protection (VMCP). VMCP protects virtual machines from storage-related events, specifically Permanent Device Loss (PDL) and All Paths Down (APD) incidents.

Permanent Device Loss (PDL)
A PDL event occurs when the storage array issues a SCSI sense code indicating that the device is unavailable. A good example of this is a failed LUN, or an administrator inadvertently removing a WWN from the zone configuration. In the PDL state, the storage array can communicate with the vSphere host and will issue SCSI sense codes to indicate the status of the device. When a PDL state is detected, the host stops sending I/O requests to the array as it considers the device permanently unavailable, so there is no reason to continue issuing I/O to the device.

All Paths Down (APD)
If the vSphere host cannot access the storage device, and there is no PDL SCSI code returned from the storage array, then the device is considered to be in an APD state. This is different than a PDL because the host doesn’t have enough information to determine if the device loss is temporary or permanent. The device may return, or it may not. During an APD condition, the host continues to retry I/O commands to the storage device until the period known as the APD Timeout is reached. Once the APD Timeout is reached, the host begins to fast-fail any non-virtual machine I/O to the storage device. This is any I/O initiated by the host such as mounting NFS volumes, but not I/O generated within the virtual machines. The I/O generated within the virtual machine will be indefinitely retried. By default, the APD Timeout value is 140 seconds and can be changed per host using the Misc.APDTimeout advanced setting.
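The Misc.APDTimeout value can be checked, and changed if you have a specific reason to, per host from PowerCLI; a small sketch, assuming a host named esxi01.lab.local:

```powershell
$vmhost = Get-VMHost 'esxi01.lab.local'

# Show the current APD timeout (140 seconds by default)
Get-AdvancedSetting -Entity $vmhost -Name 'Misc.APDTimeout'

# Set it back to the default value explicitly (adjust only with a clear justification)
Get-AdvancedSetting -Entity $vmhost -Name 'Misc.APDTimeout' | Set-AdvancedSetting -Value 140 -Confirm:$false
```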

VM Component Protection (VMCP)
vSphere HA can now detect PDL and APD conditions and respond according to the behavior that you configure. The first step is to enable VMCP in your HA configuration. This setting simply informs the vSphere HA agent that you wish to protect your virtual machines from PDL and APD events.

Cluster Settings -> vSphere HA -> Host Hardware Monitoring – VM Component Protection -> Protect Against Storage Connectivity Loss.
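The same checkbox can be flipped from PowerCLI by reconfiguring the cluster’s HA (DAS) settings through the vSphere API. This is a hedged sketch that only covers the enable switch, assuming a cluster named Prod-Cluster; the per-failure PDL/APD responses are part of the same DAS configuration but are not shown here.

```powershell
$cluster = Get-Cluster 'Prod-Cluster'

# Build a cluster reconfiguration spec that enables VM Component Protection
$spec = New-Object VMware.Vim.ClusterConfigSpecEx
$spec.DasConfig = New-Object VMware.Vim.ClusterDasConfigInfo
$spec.DasConfig.VmComponentProtecting = 'enabled'

# Apply the change; $true means merge with (rather than replace) the existing configuration
($cluster | Get-View).ReconfigureComputeResource_Task($spec, $true)
```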


The next step is configuring the way you want vSphere HA to respond to PDL and APD events. Each type of event can be configured independently. These settings are found on the same window where VMCP is enabled, by expanding the Failure conditions and VM response section.


The available settings and their explanations are covered on this blog.

Storage Protocol Comparison

Many of my friends were looking for an easy-to-understand storage protocol comparison, so here is what I created some time back for my own reference:

Protocols compared: iSCSI, NFS, Fibre Channel (FC), and Fibre Channel over Ethernet (FCoE).

Description
  • iSCSI: Presents block devices to a VMware ESXi host. Rather than accessing blocks from a local disk, I/O operations are carried out over a network using a block access protocol. In the case of iSCSI, remote blocks are accessed by encapsulating SCSI commands and data into TCP/IP packets.
  • NFS: Presents file devices over a network to an ESXi host for mounting. The NFS server/array makes its local file systems available to ESXi hosts. ESXi hosts access the metadata and files on the NFS array/server using an RPC-based protocol.
  • FC: Presents block devices similar to iSCSI. Again, I/O operations are carried out over a network using a block access protocol. In FC, remote blocks are accessed by encapsulating SCSI commands and data into FC frames. FC is commonly deployed in the majority of mission-critical environments.
  • FCoE: Also presents block devices, with I/O operations carried out over a network using a block access protocol. In this protocol, SCSI commands and data are encapsulated into Ethernet frames. FCoE has many of the same characteristics as FC, except that the transport is Ethernet.

Implementation Options
  • iSCSI: Network adapter with iSCSI capabilities, using a software iSCSI initiator and accessed using a VMkernel (vmknic) port; or a dependent hardware iSCSI initiator; or an independent hardware iSCSI initiator.
  • NFS: Standard network adapter, accessed using a VMkernel port (vmknic).
  • FC: Requires a dedicated host bus adapter (HBA), typically two for redundancy and multipathing.
  • FCoE: Hardware converged network adapter (CNA); or a network adapter with FCoE capabilities, using a software FCoE initiator.

Performance Considerations
  • iSCSI: Can run over a 1Gb or a 10Gb TCP/IP network. Multiple connections can be multiplexed into a single session, established between the initiator and target. VMware supports jumbo frames for iSCSI traffic, which can improve performance; jumbo frames send payloads larger than 1,500 bytes.
  • NFS: Can run over 1Gb or 10Gb TCP/IP networks. NFS also supports UDP, but the VMware implementation requires TCP. VMware supports jumbo frames for NFS traffic, which can improve performance in certain situations.
  • FC: Can run on 1Gb/2Gb/4Gb/8Gb and 16Gb HBAs. This protocol typically affects a host’s CPU the least, because HBAs (required for FC) handle most of the processing (encapsulation of SCSI data into FC frames).
  • FCoE: Requires 10Gb Ethernet. With FCoE, there is no IP encapsulation of the data as there is with NFS and iSCSI, which reduces some of the overhead/latency. FCoE is SCSI over Ethernet, not IP. This protocol also requires jumbo frames, because FC payloads are 2.2K in size and cannot be fragmented.

Error Checking
  • iSCSI: Uses TCP, which resends dropped packets.
  • NFS: Uses TCP, which resends dropped packets.
  • FC: Implemented as a lossless network. This is achieved by throttling throughput at times of congestion, using B2B and E2E credits.
  • FCoE: Requires a lossless network. This is achieved by the implementation of a pause frame mechanism at times of congestion.

Security
  • iSCSI: Implements the Challenge Handshake Authentication Protocol (CHAP) to ensure that initiators and targets trust each other. VLANs or private networks are highly recommended to isolate the iSCSI traffic from other traffic types.
  • NFS: VLANs or private networks are highly recommended to isolate the NFS traffic from other traffic types.
  • FC: Some FC switches support the concept of a VSAN to isolate parts of the storage infrastructure. VSANs are conceptually similar to VLANs. Zoning between hosts and FC targets also offers a degree of isolation.
  • FCoE: Some FCoE switches support the concept of a VSAN to isolate parts of the storage infrastructure. Zoning between hosts and FCoE targets also offers a degree of isolation.

ESXi Boot from SAN
  • iSCSI: Yes
  • NFS: No
  • FC: Yes
  • FCoE: Software FCoE – No; hardware FCoE (CNA) – Yes

Maximum Device Size
  • iSCSI: 64TB
  • NFS: Refer to the NAS array or NAS server vendor for the maximum supported datastore size. The theoretical size is much larger than 64TB but requires the NAS vendor to support it.
  • FC: 64TB
  • FCoE: 64TB

Maximum Number of Devices
  • iSCSI: 256
  • NFS: Default 8, maximum 256
  • FC: 256
  • FCoE: 256

Storage vMotion Support: Yes for all four protocols.

Storage DRS Support: Yes for all four protocols.

Storage I/O Control Support: Yes for all four protocols.

Virtualized MSCS Support
  • iSCSI: No. VMware does not support MSCS nodes built on virtual machines residing on iSCSI storage.
  • NFS: No. VMware does not support MSCS nodes built on virtual machines residing on NFS storage.
  • FC: Yes. VMware supports MSCS nodes built on virtual machines residing on FC storage.
  • FCoE: No. VMware does not support MSCS nodes built on virtual machines residing on FCoE storage.

This has been taken from VMware’s Storage Protocol Comparison White Paper; please check this link for more details.

Learn NSX – Part-11 (Create Distributed Logical Router)

In the previous post, we discussed creating logical switches, and workloads now have L2 adjacency across IP subnets with the help of VXLAN. In this post, we are going to enable routing between multiple logical switches.

Topology:


An NSX Edge logical router provides routing and bridging functionality. With distributed routing, virtual machines that reside on the same host on different subnets can communicate with one another without having to traverse a traditional routing interface.

Prerequisites before deploying DLR

  • At least one deployed NSX Controller node
  • At least one logical switch

NOTE – A DLR router instance cannot be connected to logical switches that exist in different transport zones.

1 – Log in to the vSphere Web Client and click Networking & Security.

2 – Select NSX Edges under the Networking & Security section.


Click the green + icon to add a new NSX Edge.


3 – In the Name and description dialog box:

  1. Select Logical (Distributed) Router as the Install Type.
  2. Enter the name in the Name text box. This name appears in your vCenter inventory and should be unique across all logical routers within a single tenant.
  3. Enter a host name for the distributed logical router in the Hostname text box.
  4. Enter a description in the Description text box.
  5. Enter tenant details in the Tenant text box.
  6. Deploy Edge Appliance is selected by default. An NSX Edge appliance is only required if you want dynamic routing and firewalling on the logical router. NOTE – An NSX Edge appliance cannot be added to the logical router after the logical router has been created.
  7. Select Enable High Availability to enable and configure high availability. High availability is required for dynamic routing.
  8. Click Next.


4 – In the Settings dialog box:

  1. Leave the default user name of admin in the User Name text box.
  2. Enter a password in the Password and Confirm Password text boxes.
  3. Select the Enable SSH access check box.
  4. Select EMERGENCY from the Edge Control Level Logging drop-down menu.
  5. Click Next.


5 – In the Configure deployment dialog box:

  1. Select the data center from the Datacenter drop down list.
  2. Click the green + icon in the NSX Edge Appliances section.
  3. Select the cluster or resource pool from the Cluster/Resource Pool drop-down menu.
  4. Select the datastore from the Datastore drop-down menu.
  5. Select the host from the Host drop down list (Optional).
  6. Select the folder from the Folder drop-down menu (Optional).
  7. Click OK and click Next.


6 – In the Configure Interfaces dialog box:

  1. Under the HA Interface Configuration section, click Select next to the Connected To selection box to choose the appropriate logical switch or distributed port group. Generally, this interface should be connected to the management distributed port group.
  2. Under the Configure interfaces of this NSX Edge section, click the green + icon to create a logical interface.
  3. Enter the logical router interface name in the Name text box.
  4. Select the Internal or Uplink radio button as the Type.
  5. Click Change next to the Connected To selection box to choose the appropriate logical switch with which to connect the interface.
  6. Select Connected for Connectivity Status.
  7. Under the Configure subnets section, click the green + icon and assign an IP address and subnet prefix length. Click OK.
  8. Repeat steps 2 through 7 for all interfaces to be created.
  9. Click OK and click Next.


7 – In the Default gateway settings dialog box, deselect the Configure Default Gateway check box and click Next.


8 – In the Ready to complete dialog box, review the configuration and click Finish.
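If you want to double-check the interfaces you just created without going back through the UI, the DLR exposes them over the NSX REST API; a minimal sketch, assuming NSX Manager at nsxmgr.lab.local, placeholder credentials, and a DLR edge ID of edge-2:

```powershell
# Lab only: accept NSX Manager's self-signed certificate
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

$auth    = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes('admin:VMware1!VMware1!'))
$headers = @{ Authorization = "Basic $auth" }

# Read the logical interfaces (LIFs) configured on the DLR (edge-2 is a placeholder)
$uri  = 'https://nsxmgr.lab.local/api/4.0/edges/edge-2/interfaces'
$lifs = Invoke-RestMethod -Uri $uri -Method Get -Headers $headers

# The response is XML; dump it to review interface names, types, and IP addressing
$lifs.OuterXml
```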


NOTE – For the HA interface, do not use an IP address that exists somewhere else on your network, even if that network is not directly attached to the NSX Edge.

Happy Learning 🙂

vCenter 6.5 HA Architecture Overview

A vCenter HA cluster consists of three vCenter Server Appliance instances. The first instance is initially used as the Active node and is cloned twice, to a Passive node and to a Witness node. Together, the three nodes provide an active-passive failover solution.

Deploying each of the nodes on a different ESXi instance protects against hardware failure. Adding the three ESXi hosts to a DRS cluster can further protect your environment. When the vCenter HA configuration is complete, only the Active node has an active management interface (public IP). The three nodes communicate over a private network, called the vCenter HA network, that is set up as part of the configuration. The Active node and the Passive node continuously replicate data.


All three nodes are necessary for the functioning of this feature. Compare the node responsibilities.

Active Node:

Runs the active vCenter Server Appliance instance
Uses a public IP address for the management interface
Uses the vCenter HA network for replication of data to the Passive node.
Uses the vCenter HA network to communicate with the Witness node.

Passive Node:

Is initially a clone of the Active node.
Constantly receives updates from and synchronizes state with the Active node over the vCenter HA network.
Automatically takes over the role of the Active node if a failure occurs.

Witness Node:

Is a lightweight clone of the Active node.
Basically works as a quorum to protect against split-brain situations.

Note:

  • vCenter HA network latency between Active, Passive, and Witness nodes must be less than 10 ms.
  • The vCenter HA network must be on a different subnet than the management network.
  • vCenter Server 6.5 is required.

I hope this will help you to plan your next vSphere upgrade with vCenter High Availability.

Happy Diwali 🙂