Many of my friends were looking for an easy-to-understand storage protocol comparison, so here is something I put together a while back for my own reference:
**Description**

- **iSCSI** presents block devices to a VMware® ESXi™ host. Rather than accessing blocks from a local disk, I/O operations are carried out over a network using a block access protocol. In the case of iSCSI, remote blocks are accessed by encapsulating SCSI commands and data into TCP/IP packets.
- **NFS** presents file devices over a network to an ESXi host for mounting (a mount sketch follows this list). The NFS server/array makes its local file systems available to ESXi hosts, which access the metadata and files on the NFS array/server using an RPC-based protocol.
- **Fibre Channel (FC)** presents block devices, similar to iSCSI. Again, I/O operations are carried out over a network using a block access protocol. In FC, remote blocks are accessed by encapsulating SCSI commands and data into FC frames. FC is commonly deployed in mission-critical environments.
- **Fibre Channel over Ethernet (FCoE)** also presents block devices, with I/O operations carried out over a network using a block access protocol. In this protocol, SCSI commands and data are encapsulated into Ethernet frames. FCoE has many of the same characteristics as FC, except that the transport is Ethernet.
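
To illustrate the NFS model, mounting an export as a datastore is a single operation from the host side. Here is a minimal sketch using esxcli; the server address, export path, and datastore name are made-up placeholders:

```sh
# Mount an NFS export as an ESXi datastore (server/share/name are placeholders).
esxcli storage nfs add --host=192.168.1.100 --share=/export/datastore01 --volume-name=nfs-ds01

# Verify the mount.
esxcli storage nfs list
```
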
**Implementation Options**

- **iSCSI:** a network adapter with iSCSI capabilities, using the software iSCSI initiator and accessed through a VMkernel port (vmknic); a dependent hardware iSCSI initiator; or an independent hardware iSCSI initiator. (A configuration sketch for the software options follows this list.)
- **NFS:** a standard network adapter, accessed through a VMkernel port (vmknic).
- **FC:** requires a dedicated host bus adapter (HBA), typically two for redundancy and multipathing.
- **FCoE:** a hardware converged network adapter (CNA), or a network adapter with FCoE capabilities using the software FCoE initiator.
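
The software initiator options above are enabled from the host itself. A minimal sketch using esxcli; the adapter name vmhba65, the NIC vmnic2, and the target portal address are all placeholders:

```sh
# Enable the software iSCSI initiator.
esxcli iscsi software set --enabled=true

# Point the initiator at a target portal via Send Targets discovery
# (adapter name and portal address are placeholders).
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.1.50:3260

# Activate the software FCoE initiator on an FCoE-capable NIC.
esxcli fcoe nic discover --nic-name=vmnic2
```
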
**Performance Considerations**

- **iSCSI** can run over a 1Gb or a 10Gb TCP/IP network. Multiple connections can be multiplexed into a single session established between the initiator and target. VMware supports jumbo frames for iSCSI traffic, which can improve performance; jumbo frames carry payloads larger than 1,500 bytes. (A jumbo-frame configuration sketch follows this list.)
- **NFS** can run over 1Gb or 10Gb TCP/IP networks. NFS also supports UDP, but the VMware implementation requires TCP. VMware supports jumbo frames for NFS traffic, which can improve performance in certain situations.
- **FC** can run on 1Gb/2Gb/4Gb/8Gb and 16Gb HBAs. This protocol typically affects a host's CPU the least, because HBAs (required for FC) handle most of the processing (encapsulation of SCSI data into FC frames).
- **FCoE** requires 10Gb Ethernet. With FCoE there is no IP encapsulation of the data as there is with NFS and iSCSI, which reduces some of the overhead/latency; FCoE is SCSI over Ethernet, not over IP. This protocol also requires jumbo frames, because FC payloads are approximately 2.2KB in size and cannot be fragmented.
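
Note that jumbo frames must be enabled end to end: on the physical switches, on the vSwitch, and on the VMkernel port carrying the storage traffic. A minimal sketch, assuming a standard vSwitch named vSwitch0 and a storage VMkernel port vmk1 (both placeholder names):

```sh
# Raise the MTU on the vSwitch carrying the iSCSI/NFS traffic.
esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000

# Raise the MTU on the VMkernel interface used for storage.
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
```
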
**Error Checking**

- **iSCSI** uses TCP, which resends dropped packets.
- **NFS** uses TCP, which resends dropped packets.
- **FC** is implemented as a lossless network. This is achieved by throttling throughput at times of congestion, using buffer-to-buffer (B2B) and end-to-end (E2E) credits.
- **FCoE** requires a lossless network. This is achieved by the implementation of a PAUSE frame mechanism at times of congestion.
Security |
iSCSI implements the Challenge Handshake Authentication Protocol (CHAP) to ensure that initiators and targets trust each other. VLANs or private networks are highly recommended, to isolate the iSCSI traffic from other traffic types. |
VLANs or private networks are highly recommended, to isolate the NFS traffic from other traffic types |
Some FC switches support the concepts of a VSAN, to isolate parts of the storage infrastructure. VSANs are conceptually similar to VLANs. Zoning between hosts and FC targets also offers a degree of isolation. |
Some FCoE switches support the concepts of a VSAN, to isolate parts of the storage infrastructure. Zoning between hosts and FCoE targets also offers a degree of isolation |
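
CHAP is configured per adapter (or per target) on the host, and the name and secret must match what the array expects. A hedged sketch; the adapter name, CHAP name, and secret below are placeholders:

```sh
# Require unidirectional CHAP on the software iSCSI adapter.
# vmhba65, the CHAP name, and the secret are all placeholders;
# they must match what is configured on the array/target side.
esxcli iscsi adapter auth chap set --adapter=vmhba65 --direction=uni \
  --authname=iscsi-host01 --secret=CHANGE_ME --level=required
```
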
The remaining categories are quick facts, summarized in the table below:

| | iSCSI | NFS | Fibre Channel | FCoE |
| --- | --- | --- | --- | --- |
| ESXi Boot from SAN | Yes | No | Yes | Software FCoE: No. Hardware FCoE (CNA): Yes |
| Maximum Device Size | 64TB | Refer to the NAS array/server vendor for the maximum supported datastore size; the theoretical size is much larger than 64TB, but the NAS vendor must support it | 64TB | 64TB |
| Maximum Number of Devices | 256 | Default: 8; maximum: 256 | 256 | 256 |
| Storage vMotion Support | Yes | Yes | Yes | Yes |
| Storage DRS Support | Yes | Yes | Yes | Yes |
| Storage I/O Control Support | Yes | Yes | Yes | Yes |
| Virtualized MSCS Support | No; VMware does not support MSCS nodes on virtual machines residing on iSCSI storage | No; VMware does not support MSCS nodes on virtual machines residing on NFS storage | Yes; VMware supports MSCS nodes on virtual machines residing on FC storage | No; VMware does not support MSCS nodes on virtual machines residing on FCoE storage |
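
A quick way to check a host against the device maximums in the table above is to count what it currently sees; both of the commands below are standard esxcli namespaces:

```sh
# List the block devices the host sees (counts against the 256-device limit).
esxcli storage core device list

# List every path to those devices, useful for verifying multipathing.
esxcli storage core path list
```
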
This has been taken from VMware's Storage Protocol Comparison white paper; please check this link for more details.