Category: vSphere

vSphere 6.5 Encryption using HyTrust KeyControl 4.1 – Part-2

In Part-1 we configured the HyTrust KeyControl cluster; now let's add this cluster to vCenter and configure encryption for virtual machines.

Let's create a user to be used with vCenter. Click on the Users tab to create a new user, then click the Actions drop-down button and select Create User.


For ease of use, create a user with the same name as the VMware VCSA.

NOTE – Do NOT specify a password; otherwise the trust establishment will fail.


Highlight the newly created user, click the Actions drop-down button, then click the Download Certificate option. A zip file containing the user certificate and the Certificate Authority (CA) certificate will be downloaded.


Once you have downloaded the certificate, log in to the VCSA, highlight the vCenter Server in the left-hand pane, click the Configure tab in the right-hand pane, click Key Management Servers, then click the Add KMS button.


Enter a Cluster name, Server Alias, Fully Qualified Domain Name (FQDN)/IP of the server, and the port number. Leave the other fields as the default, then click OK.
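If you prefer scripting this step, the same registration can be done through the vSphere API from PowerCLI. Here is a minimal sketch, assuming an existing Connect-VIServer session; the cluster name, server alias, and FQDN below are examples matching this walkthrough, and MarkDefault mirrors the "set as default" prompt that follows in the UI:

# Get the KMIP crypto manager of the connected vCenter
$cm = Get-View $global:DefaultVIServer.ExtensionData.Content.CryptoManager

# Describe the KMS cluster and server (names/addresses are examples)
$spec = New-Object VMware.Vim.KmipServerSpec
$spec.ClusterId = New-Object VMware.Vim.KeyProviderId
$spec.ClusterId.Id = "Hytrust"
$spec.Info = New-Object VMware.Vim.KmipServerInfo
$spec.Info.Name = "KeyControl-01"
$spec.Info.Address = "keycontrol01.corp.local"
$spec.Info.Port = 5696

# Register the KMS and make the cluster the default
$cm.RegisterKmipServer($spec)
$cm.MarkDefault($spec.ClusterId)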


Click on Yes to set the KMS cluster “Hytrust” as the default.


Click on Trust to trust the Certificate from HyTrust KeyControl.


Now we have to establish the trust relationship between vCenter and HyTrust KeyControl. Highlight the KeyControl appliance, click on All Actions, then click on Establish trust with KMS.


Select the Upload certificate and private key option, then click OK.


Click the Upload file button.


Browse to where the certificate zip file was previously downloaded and extracted, select the "vcenter".pem file, then click Open.


Repeat the process for the private key by clicking the second Upload file button. Verify that both fields are populated with the same file, then click OK.
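This trust step can also be scripted. The KMIP crypto manager exposes an UploadClientCert method that takes the certificate and private key as PEM strings; here is a minimal sketch reusing $cm and $spec from the earlier snippet, with an example file path (the downloaded "vcenter".pem contains both the certificate and the key):

# Read the PEM downloaded from KeyControl (example path)
$pem = Get-Content "C:\certs\vcenter.pem" -Raw

# Upload the same file as both certificate and private key
$cm.UploadClientCert($spec.ClusterId, $pem, $pem)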


You will now see that the Connection status is shown as Normal, indicating that trust has been established. HyTrust KeyControl is now set up as the Key Management Server (KMS) for vCenter.


We have now successfully added one node of the cluster; add the other node by following the same steps.


You can now begin encrypting virtual machines with vSphere 6.5, which I will cover in the next post. Happy Learning 🙂

 

 


vSphere 6.5 Encryption using HyTrust KeyControl 4.1 – Part-1

HyTrust KeyControl includes a fully functional KMIP server that can be deployed as a vSphere Key Management Server. Once deployment is complete and a trusted connection between KeyControl and vSphere has been established, KeyControl can manage the encryption keys for virtual machines in the cluster that are encrypted with vSphere Virtual Machine Encryption or VMware vSAN Encryption.

In this post we will deploy the HyTrust KeyControl KMS server and set up a KMS cluster.

There are two methods for installing KeyControl: the OVA appliance or the ISO. In this post we will use the OVA method.

Open your vSphere Web Client and click on "Deploy OVF Template".

Choose the OVF file, then click Next.

Provide a name for the HyTrust KeyControl appliance, select a deployment location, then click Next.

Select the vSphere cluster or host on which you would like to install the HyTrust KeyControl appliance, then click Next.


Review the details, then click Next.


Select the proper configuration from the drop-down menu, then click Next.


Select the preferred storage and disk format for the KeyControl appliance, then click Next.


Select the appropriate network and enter the network details, then click Next.


Review the summary screen; if everything is correct, click Finish.



Appliance deployment has completed successfully. Since I am going to set up a cluster, I will go ahead and deploy another appliance using the same procedure.
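Deploying the second appliance is a good candidate for scripting. Here is a minimal PowerCLI sketch using Get-OvfConfiguration and Import-VApp; the file path, appliance name, host, datastore, and port group are examples, and the OVF property names can vary between KeyControl releases, so inspect $ovfCfg before relying on them:

# Load the OVA and its configurable OVF properties
$ovfPath = "C:\ova\HyTrust-KeyControl.ova"
$ovfCfg = Get-OvfConfiguration -Ovf $ovfPath

# Map the appliance network to a port group (check $ovfCfg for the exact property name)
$ovfCfg.NetworkMapping.Network.Value = "VM Network"

# Deploy the appliance to a host and datastore
Import-VApp -Source $ovfPath -OvfConfiguration $ovfCfg -Name "KeyControl-02" -VMHost (Get-VMHost "esx01.corp.local") -Datastore (Get-Datastore "Datastore01") -DiskStorageFormat Thin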

Once both appliances have been deployed, power on the first HyTrust KeyControl appliance and open a console to it. Set the system password, then press OK.


Since this is the first node, select No, then press Enter.


Review the Appliance Configuration, then press OK.


The first KeyControl appliance is now configured, and you can move on to the KeyControl webGUI. Open a web browser and navigate to the IP or FQDN of the KeyControl appliance. Use the following default credentials to log in:
Username: secroot
Password: secroot


After logging in, read and accept the EULA by clicking I Agree at the bottom of the agreement.


Enter a new password for the secroot account, then click Update Password.


We have now successfully set up our first node.


Setup Cluster

Power on the second appliance and follow the same steps as above, except select "Yes" here.


This takes us into the cluster-join process.


Enter the IP address of the first node.


The final piece of information required is the authentication passphrase, which must be a minimum of 16 characters.


The console then indicates that the node must be authenticated through the webGUI.


At this point you need to log on to the webGUI of the first node with administration privileges. The new KeyControl node will automatically appear as an unauthenticated node in the KeyControl cluster.


To authenticate the new node, click the Actions button and then click Authenticate. On the authentication screen that opens, you are prompted to enter the authentication passphrase.


On the new KeyControl node's console, you will see a succession of status messages.


Once authentication completes, the KeyControl node is listed as Authenticated but Unreachable until cluster synchronization completes; this should not take more than a minute or two. When the node becomes available, its status automatically changes to Authenticated and Online, and the cluster status at the top right of the screen changes back to Healthy.


At this point, the new cluster/node is ready to use.

Now click the KMIP button on the toolbar to configure the KMIP service.


Enable KMIP by changing the state from Disabled to Enabled, click Save, then click Apply.

NOTE: Take note of the port number 5696 and have it handy. You will specify this port number in the vCenter/VCSA configuration later on.
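Before moving on, it is worth confirming that the KMIP port is reachable from the network where vCenter lives. A quick check from a PowerShell prompt (the hostname is an example):

# Verify TCP connectivity to the KeyControl KMIP listener on port 5696
Test-NetConnection -ComputerName keycontrol01.corp.local -Port 5696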

We have now successfully set up the KMS cluster.

In the next post, we will configure vSphere to use this cluster as its KMS server. Happy Learning 🙂

 

 

VCAP6-DCV Design Exam Experience

Yesterday I sat for the VMware Certified Advanced Professional 6 — Data Center Virtualization Design exam, and I am really happy to share that I passed this difficult exam.

The exam is 175 minutes, in which you have to complete 18 questions. Since India is a non-native English-speaking country, you get 30 minutes extra. The exam had around 5 or 6 design-type questions, compared to a single master question in the 5.5 exam.

If you are preparing for this exam, you must focus on the design attributes (Availability, Manageability, Performance, Recoverability, and Security), functional and non-functional requirements, assumptions, constraints, and risks.

I referred to the following documents for preparation:

VCAP6-Design Blueprint and all associated documents.

Function vs non-functional – https://communities.vmware.com/servlet/JiveServlet/downloadBody/17409-102-2-22494/Functional%20versus%20Non-functional.pdf

HIPAA security guide – http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/product-applicability-guide-hipaa-hitech-technical-guide.pdf

The VCAP5-DCD cert guide is still a good resource for understanding the design framework, and it really helps in the exam.

A few tips:

  1. Be fresh and rested; at 205 minutes, it's quite a long time to sit in front of the screen.
  2. Stay focused and read carefully all the questions and instructions at least twice.
  3. I would suggest starting with the design questions, which take a little more time.

 

Create VM Using PowerCli and an XML File

To create a VM from an XML file, first we have to create an XML file with the settings for the VM. I created a very basic one that only specifies the name and the size of the vDisk.

<CreateVM>
  <VM>
    <Name>MyVM1</Name>
    <HDDCapacity>1</HDDCapacity>
  </VM>
  <VM>
    <Name>MyVM2</Name>
    <HDDCapacity>1</HDDCapacity>
  </VM>
</CreateVM>

Let's first read the content of the XML file:

PowerCLI C:\>[xml]$s = Get-Content c:\avn\myvm.xml

Then create the virtual machines using the simple PowerCLI one-liner below:

PowerCLI C:\> $s.CreateVM.VM | foreach { New-VM -VMHost esxcomp-01a.corp.local -Name $_.Name -DiskMB $_.HDDCapacity}

Both virtual machines are created successfully.

You can expand the XML file to include a host, disk types, CPU, memory, and other settings.
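For example, here is a sketch of what an expanded file and a matching one-liner might look like; the element names are my own convention, not a fixed schema:

<CreateVM>
  <VM>
    <Name>MyVM3</Name>
    <Host>esxcomp-01a.corp.local</Host>
    <NumCPU>2</NumCPU>
    <MemoryGB>4</MemoryGB>
    <HDDCapacity>10240</HDDCapacity>
    <DiskType>Thin</DiskType>
  </VM>
</CreateVM>

PowerCLI C:\> [xml]$s = Get-Content c:\avn\myvm.xml
PowerCLI C:\> $s.CreateVM.VM | foreach { New-VM -VMHost $_.Host -Name $_.Name -NumCpu $_.NumCPU -MemoryGB $_.MemoryGB -DiskMB $_.HDDCapacity -StorageFormat $_.DiskType }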

Custom TCP/IP Stacks

In vSphere 5.1 and earlier, there was a single TCP/IP stack used by all the different types of network traffic. This meant that management, virtual machine (VM) traffic, vMotion, NFC, etc. were all fixed to the same stack. Because of the shared stack, all VMkernel interfaces had some things in common: the same default gateway, the same memory heap, and the same ARP and routing tables.

From vSphere 5.5 onward, VMware changed this, allowing multiple TCP/IP stacks, but with some limitations: only certain types of traffic can use a stack other than the default one, and a custom TCP/IP stack has to be created from the command line using an ESXCLI command.

In vSphere 6, VMware went ahead and created separate vMotion and Provisioning stacks by default, and deploying NSX creates its own stack as well.

NOTE – Custom TCP/IP stacks aren’t supported for use with fault tolerance logging, management traffic, Virtual SAN traffic, vSphere Replication traffic, or vSphere Replication NFC traffic.

Create a Custom TCP/IP Stack:

To create a new TCP/IP stack we must use ESXCLI:

>esxcli network ip netstack add -N="stack name"


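Once the stack exists, you can list the stacks on the host and attach a new VMkernel interface to it, again from ESXCLI. A short sketch; the vmk number and port group name are examples, and the exact option letters may vary by ESXi release:

>esxcli network ip netstack list
>esxcli network ip interface add -i vmk3 -p "Custom-PG" -N "stack name"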

Modify Custom TCP/IP Stack Settings:

Now that you have created your custom stack, it needs to be configured with settings such as which DNS servers to use and which address to use as the default gateway. In the advanced settings you can configure which congestion control algorithm to use and the maximum number of connections that can be active at any particular time.

To configure these settings, go to Manage > Networking > TCP/IP configuration and highlight the stack to be configured.


On the edit page, these settings can be modified:

  • Name. Change the name if required.
  • DNS Configuration. Use DHCP so the custom TCP/IP stack can pick up settings from DHCP, or specify static DNS servers with search domains.
  • Default Gateway. A separate gateway is one of the primary reasons for creating a TCP/IP stack apart from the default one, and it can be configured here.
  • Congestion Control Algorithm. The algorithm specified here affects vSphere's response when congestion is suspected on the network stack.
  • Max Number of Connections. The default is set to 11,000.


Configuring custom TCP/IP stacks also lets you have a separate gateway for each kind of traffic, if that is how the network has been designed.
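If you prefer the command line for this as well, a per-stack default gateway can be set with ESXCLI; the gateway address below is an example:

>esxcli network ip route ipv4 add -N "stack name" -n default -g 192.168.50.1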

VMware vSphere Replication 6.5 released with 5 minute RPO

I was going through the release notes of vSphere Replication 6.5 and was really surprised to see the RPO of this new release, so I thought it worth sharing:

VMware vSphere Replication 6.5 provides the following new features:

  • 5-minute Recovery Point Objective (RPO) support for additional datastore types – This version of vSphere Replication extends support for the 5-minute RPO setting to the following new datastores: VMFS 5, VMFS 6, NFS 4.1, NFS 3, VVOL, and VSAN 6.5. This allows customers to replicate virtual machine workloads with an RPO setting as low as 5 minutes between these various datastore options.

This is surely a great reason to upgrade to vSphere 6.5, identify which of your VMs can work with a 5-minute RPO, and use this free replication feature of the vSphere suite.

VM Component Protection (VMCP)

vSphere 6.0 includes a powerful feature as part of vSphere HA called VM Component Protection (VMCP). VMCP protects virtual machines from storage-related events, specifically Permanent Device Loss (PDL) and All Paths Down (APD) incidents.

Permanent Device Loss (PDL)
A PDL event occurs when the storage array issues a SCSI sense code indicating that the device is unavailable. A good example of this is a failed LUN, or an administrator inadvertently removing a WWN from the zone configuration. In the PDL state, the storage array can communicate with the vSphere host and will issue SCSI sense codes to indicate the status of the device. When a PDL state is detected, the host will stop sending I/O requests to the array as it considers the device permanently unavailable, so there is no reason to continue issuing I/O to the device.

All Paths Down (APD)
If the vSphere host cannot access the storage device, and there is no PDL SCSI code returned from the storage array, then the device is considered to be in an APD state. This is different from a PDL because the host doesn't have enough information to determine if the device loss is temporary or permanent. The device may return, or it may not. During an APD condition, the host continues to retry I/O commands to the storage device until the period known as the APD Timeout is reached. Once the APD Timeout is reached, the host begins to fast-fail any non-virtual machine I/O to the storage device. This is any I/O initiated by the host, such as mounting NFS volumes, but not I/O generated within the virtual machines; I/O generated within a virtual machine is indefinitely retried. By default, the APD Timeout value is 140 seconds and can be changed per host using the Misc.APDTimeout advanced setting.
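For example, the timeout can be inspected and changed per host from PowerCLI (the host name and new value are examples):

# Read the current APD timeout on a host
Get-AdvancedSetting -Entity (Get-VMHost "esx01.corp.local") -Name "Misc.APDTimeout"

# Change it (value in seconds; 140 is the default)
Get-AdvancedSetting -Entity (Get-VMHost "esx01.corp.local") -Name "Misc.APDTimeout" | Set-AdvancedSetting -Value 180 -Confirm:$false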

VM Component Protection (VMCP)
vSphere HA can now detect PDL and APD conditions and respond according to the behavior that you configure. The first step is to enable VMCP in your HA configuration. This setting simply informs the vSphere HA agent that you wish to protect your virtual machines from PDL and APD events.

Cluster Settings -> vSphere HA -> Host Hardware Monitoring – VM Component Protection -> Protect Against Storage Connectivity Loss.


The next step is configuring the way you want vSphere HA to respond to PDL and APD events. Each type of event can be configured independently. These settings are found in the same window where VMCP is enabled, by expanding the Failure conditions and VM response section.
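The same configuration can also be scripted against the cluster object. Here is a hedged sketch using the vSphere API from PowerCLI, assuming a hypothetical cluster named "Prod-Cluster"; the property names follow the vSphere 6.0 API (ClusterDasConfigInfo and ClusterVmComponentProtectionSettings):

# Build an HA reconfiguration spec that turns VMCP on
$spec = New-Object VMware.Vim.ClusterConfigSpecEx
$spec.DasConfig = New-Object VMware.Vim.ClusterDasConfigInfo
$spec.DasConfig.VmComponentProtecting = "enabled"

# Cluster-default VM responses for PDL and APD events
$vmcp = New-Object VMware.Vim.ClusterVmComponentProtectionSettings
$vmcp.VmStorageProtectionForPDL = "restartAggressive"
$vmcp.VmStorageProtectionForAPD = "restartConservative"
$spec.DasConfig.DefaultVmSettings = New-Object VMware.Vim.ClusterDasVmSettings
$spec.DasConfig.DefaultVmSettings.VmComponentProtectionSettings = $vmcp

# Apply the spec; $true means modify (merge with) the existing configuration
(Get-Cluster "Prod-Cluster").ExtensionData.ReconfigureComputeResource($spec, $true)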


The available settings and their explanations are covered on this blog.

Storage Protocol Comparison

Many of my friends were looking for an easy-to-understand storage protocol comparison, so here is what I created some time back for my own reference:

The comparison covers iSCSI, NFS, Fibre Channel (FC), and Fibre Channel over Ethernet (FCoE).

Description
  • iSCSI: Presents block devices to a VMware® ESXi™ host. Rather than accessing blocks from a local disk, I/O operations are carried out over a network using a block access protocol. Remote blocks are accessed by encapsulating SCSI commands and data into TCP/IP packets.
  • NFS: Presents file devices over a network to an ESXi host for mounting. The NFS server/array makes its local file systems available to ESXi hosts, which access the metadata and files on the NFS array/server using an RPC-based protocol.
  • FC: Presents block devices similar to iSCSI. Again, I/O operations are carried out over a network using a block access protocol; remote blocks are accessed by encapsulating SCSI commands and data into FC frames. FC is commonly deployed in the majority of mission-critical environments.
  • FCoE: Also presents block devices, with I/O operations carried out over a network using a block access protocol. SCSI commands and data are encapsulated into Ethernet frames. FCoE has many of the same characteristics as FC, except that the transport is Ethernet.

Implementation Options
  • iSCSI: A network adapter with iSCSI capabilities, using a software iSCSI initiator and accessed through a VMkernel (vmknic) port; or a dependent hardware iSCSI initiator; or an independent hardware iSCSI initiator.
  • NFS: A standard network adapter, accessed through a VMkernel port (vmknic).
  • FC: Requires a dedicated host bus adapter (HBA), typically two for redundancy and multipathing.
  • FCoE: A hardware converged network adapter (CNA); or a network adapter with FCoE capabilities, using a software FCoE initiator.

Performance Considerations
  • iSCSI: Can run over a 1Gb or a 10Gb TCP/IP network. Multiple connections can be multiplexed into a single session, established between the initiator and target. VMware supports jumbo frames for iSCSI traffic, which can improve performance; jumbo frames carry payloads larger than 1,500 bytes.
  • NFS: Can run over 1Gb or 10Gb TCP/IP networks. NFS also supports UDP, but the VMware implementation requires TCP. VMware supports jumbo frames for NFS traffic, which can improve performance in certain situations.
  • FC: Can run on 1Gb/2Gb/4Gb/8Gb and 16Gb HBAs. This protocol typically affects a host's CPU the least, because the HBAs (required for FC) handle most of the processing (encapsulation of SCSI data into FC frames).
  • FCoE: Requires 10Gb Ethernet. With FCoE there is no IP encapsulation of the data as there is with NFS and iSCSI, which reduces some of the overhead/latency; FCoE is SCSI over Ethernet, not IP. This protocol also requires jumbo frames, because FC payloads are 2.2K in size and cannot be fragmented.

Error Checking
  • iSCSI: Uses TCP, which resends dropped packets.
  • NFS: Uses TCP, which resends dropped packets.
  • FC: Implemented as a lossless network. This is achieved by throttling throughput at times of congestion, using B2B and E2E credits.
  • FCoE: Requires a lossless network. This is achieved by the implementation of a pause frame mechanism at times of congestion.

Security
  • iSCSI: Implements the Challenge Handshake Authentication Protocol (CHAP) to ensure that initiators and targets trust each other. VLANs or private networks are highly recommended, to isolate the iSCSI traffic from other traffic types.
  • NFS: VLANs or private networks are highly recommended, to isolate the NFS traffic from other traffic types.
  • FC: Some FC switches support the concept of a VSAN, to isolate parts of the storage infrastructure; VSANs are conceptually similar to VLANs. Zoning between hosts and FC targets also offers a degree of isolation.
  • FCoE: Some FCoE switches support the concept of a VSAN, to isolate parts of the storage infrastructure. Zoning between hosts and FCoE targets also offers a degree of isolation.

ESXi Boot from SAN
  • iSCSI: Yes. NFS: No. FC: Yes. FCoE: Software FCoE – No; hardware FCoE (CNA) – Yes.

Maximum Device Size
  • iSCSI, FC, FCoE: 64TB.
  • NFS: Refer to the NAS array/server vendor for the maximum supported datastore size; the theoretical size is much larger than 64TB, but it requires the NAS vendor to support it.

Maximum Number of Devices
  • iSCSI, FC, FCoE: 256.
  • NFS: Default 8, maximum 256.

Storage vMotion, Storage DRS, and Storage I/O Control Support
  • Yes for all four protocols.

Virtualized MSCS Support
  • iSCSI: No. VMware does not support MSCS nodes built on virtual machines residing on iSCSI storage.
  • NFS: No. VMware does not support MSCS nodes built on virtual machines residing on NFS storage.
  • FC: Yes. VMware supports MSCS nodes built on virtual machines residing on FC storage.
  • FCoE: No. VMware does not support MSCS nodes built on virtual machines residing on FCoE storage.

This has been taken from VMware's Storage Protocol Comparison white paper; please check this link for more details.