Assess Your Sovereign Cloud Stack for Compliance


VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud is a management pack available in the VMware Marketplace. You can download and install this management pack on an instance of vRealize (ARIA) Operations to automatically assess a Sovereign Cloud stack for compliance. 

VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud is intended to be used by the VMware Cloud Service Partners who are part of the Sovereign Cloud Initiative. The following products in the Sovereign Cloud stack are currently supported for compliance assessment:

  • vSphere
  • NSX-T
  • VMware Cloud Director
  • VMware Cloud Director Availability

For every Sovereign Cloud instance, providers need one instance of vRealize (ARIA) Operations with the VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud installed and configured. The compliance score card is available in the Optimize > Compliance screen of vRealize (ARIA) Operations.

Compliance Pack for Sovereign Cloud Control Rules

The compliance rules are based on a checklist that VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud utilizes to monitor the products in the Sovereign Cloud stack. The checklist is based on the Sovereign Cloud Framework which takes into consideration the following key principles:

Data Sovereignty and Jurisdictional Control

Data should reside locally.
The cloud should be managed and governed locally, and all data processing including API calls should happen within the country/geography.
Data should be accessible only to residents of the same country, and the data should not be accessible under foreign laws or from any outside geography.

Data Access and Integrity

Two data center locations.
File, Block, and Object store options
Backup services, Disaster Recovery
Low-latency connectivity, Micro segmentation

Data Security and Compliance

Industry recognized Security Controls (minimum ISO/IEC 27001 or equivalent)
Additional relevant industry or governmental certifications
Third-party audits & Zero Trust Security & Encryption
Catalog of trusted images using the sovereign repository
Support for air gapped zones/regions
Operating personnel requirements and security clearance

Data Independence and Interoperability

Workload migration with bi-directional workload portability
Modern application architecture using containers
Support for hybrid cloud deployments

Control Rules and Product Control Set for vSphere

The vSphere control set is available separately for vSphere 7 and for 6.5/6.7, and details can be found Here.

Control Rules and Product Control Set for NSX-T

The control rules and product control set for NSX-T can be found Here. The supported NSX-T version is 3.2.x or later.

Control Rules and Product Control Set for VMware Cloud Director

The control rules and product control set for VMware Cloud Director can be found Here. The supported version of the VMware Cloud Director management pack is 8.10.2.

Control Rules and Product Control Set for VMware Cloud Director Availability

The control rules and product control set for VMware Cloud Director Availability can be found Here. The supported version of the VMware Cloud Director Availability management pack is 1.2.1.

Install the VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud

The VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud consists of a PAK file that contains default views, reports, alerts, and symptoms for the VMware software in the Sovereign Cloud stack.

  • Download the PAK file for VMware vRealize Operations Compliance Pack for Sovereign Cloud from the VMware Marketplace, and save the file to a temporary folder on your local system.
  • Log in to the vRealize (ARIA) Operations user interface with administrator privileges. Installation of this management pack is to be done by VMware Cloud Provider Program Partners.
  • In the left pane of vRealize (ARIA) Operations, click Integrations under Data Sources.
  • In the Repository tab, click the ADD button.
  • The Add Solution dialog box opens. Click BROWSE to locate the temporary folder on your system, and select the PAK file.
  • Read and select the checkboxes if required, then click Upload. The upload might take several minutes.
  • Read and accept the EULA, and click Next. Installation details appear in the window during the process.
  • When the installation is completed, click Finish.

In the vROps instance, go to Optimize > Compliance. In the VMware Cloud tab, you can see the VMware Sovereign Cloud Compliance card in the VMware Sovereign Cloud Benchmarks section. Click Enable.

When you click Enable, a list of policies appears, and you must select the policy that you want to apply. In this example, I am selecting the default policy.

With the vRealize (ARIA) Operations reporting functions, you can generate a report to view the compliance status of your Sovereign Cloud. You can download the report in a PDF or CSV file format for future and offline needs.

Accessing Compliance Reports

In the VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud, two kinds of reports are available:

At a VMware Cloud Provider Program Partner level – This report considers all the infrastructure level resources and generates a report at the org level, showing non-compliance. The compliance data is for the org and associated child hierarchy. Includes compliance details for org, org-vdc, virtual machines, logical switches, and logical routers as per the hierarchy.

From the left menu, click Visualize > Reports and run the report – [vCloud Director] – VMware Sovereign Cloud – Non-Compliance Report.

From the Reports panel, click Generated Reports. To work with a generated report from the list, click the vertical ellipsis next to the [vCloud Director] – VMware Sovereign Cloud – Non-Compliance Report and select options such as Run or Delete.

At a tenant level – The VMware Cloud Provider Program Partner generates a filtered report that the tenant can access. This report is at an org-VDC level. The compliance data is for the child hierarchy only, and includes compliance details for virtual machines, logical switches, and logical routers as per the hierarchy. Tenants can view the generated reports in the VMware Chargeback console by logging in, navigating to Reports > Tenant Reports, and clicking the Generated Reports tab.

Custom Benchmarks

If the CSP partner/customer requires it, they can create a custom compliance benchmark to ensure that objects comply with the compliance alerts available in vRealize (ARIA) Operations, or with custom compliance alert definitions. When a compliance alert is triggered on your vCenter instance, hosts, virtual machines, distributed port groups, or distributed switches, you investigate the compliance violation. You can add up to five custom compliance scorecards.

This first version of the Sovereign Cloud compliance pack for ARIA Operations brings our Cloud Providers a comprehensive compliance checklist that uses the sovereign framework as a benchmark and continuously validates applications and infrastructure to help partners maintain their sovereignty posture. It extends ARIA Operations, which, in addition to capacity, cost, and performance monitoring, will now monitor and report compliance drift to the right stakeholders so they can better manage their compliance posture.


NFS DataStore on VMware Cloud on AWS using Amazon FSx for NetApp



Amazon FSx for NetApp ONTAP integration with VMware Cloud on AWS is an AWS-managed external NFS datastore built on NetApp’s ONTAP file system that can be attached to a cluster in your SDDC. It provides customers with flexible, high-performance virtualized storage infrastructure that scales independently of compute resources.

PROCESS

  • Make sure an SDDC has been deployed on VMware Cloud on AWS with SDDC version 1.20 or later.
  • The SDDC is added to an SDDC Group. While creating the SDDC Group, a VMware Managed Transit Gateway (vTGW) is automatically deployed and configured.
  • A Multi-AZ file system powered by Amazon FSx for NetApp ONTAP is deployed across two AWS Availability Zones (AZs). (You can also deploy in a single AZ, but this is not recommended for production.)

DEPLOY VMWARE MANAGED TRANSIT GATEWAY

To use FSx for ONTAP as an external datastore, an SDDC must be a member of an SDDC group so that it can use the group's vTGW. To configure this, you must be logged in to the VMC Console as a user with a VMC service role of Administrator, then follow the steps below:

  • Log in to the VMC Console, go to the Inventory page, and click SDDC Groups
  • On the SDDC Groups tab, click ACTIONS and select Create SDDC Group
  • Give the group a Name and optional Description, then click NEXT
  • On the Membership grid, select the SDDCs to include as group members. The grid displays a list of all SDDCs in your organization. To qualify for membership in the group, an SDDC must meet several criteria:
    • It must be at SDDC version 1.11 or later. Members of a multi-region group must be at SDDC version 1.15 or later.
    • Its management network CIDR block cannot overlap the management CIDR block of any other group member.
    • It cannot be a member of another SDDC Group.
    When you have finished selecting members, click NEXT. You can edit the group later to add or remove members.
  • Acknowledge that you understand and take responsibility for the costs you incur when you create an SDDC group, then click CREATE GROUP to create the SDDC Group and its VMware Transit Connect network.

ATTACH VPC TO VMWARE MANAGED TRANSIT GATEWAY

After the SDDC Group is created, it shows up in your list of SDDC Groups. Select the SDDC Group, go to the External VPC tab, and click the ADD ACCOUNT button. Provide the AWS account that will be used to provision the FSx file system, and then click Add.

Now go back to the AWS console and sign in to the same AWS account where you will create the Amazon FSx file system. Navigate to the Resource Access Manager service page and click on the Accept resource share button.

Next, we need to attach the VMware Managed Transit Gateway to the FSx VPC:

ATTACH VMWARE MANAGED TRANSIT GATEWAY TO VPC

  • Open the Amazon VPC console and navigate to Transit Gateway Attachments.
  • Choose Create transit gateway attachment
  • For Name tag, optionally enter a name for the transit gateway attachment.
  • For Transit gateway ID, choose the transit gateway for the attachment. Make sure you choose the transit gateway that was shared with you.
  • For Attachment type, choose VPC.
  • For VPC ID, choose the VPC to attach to the transit gateway. This VPC must have at least one subnet associated with it.
  • For Subnet IDs, select one subnet for each Availability Zone to be used by the transit gateway to route traffic. You must select at least one subnet. You can select only one subnet per Availability Zone.
  • Choose Create transit gateway attachment. (An equivalent AWS CLI call is sketched below.)
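
If you prefer to script this step, a roughly equivalent AWS CLI call is shown below; the transit gateway, VPC, and subnet IDs are placeholders for your environment:

# Attach the VPC that will host the FSx file system to the shared transit gateway
aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id tgw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0 \
    --subnet-ids subnet-0aaaa1111bbbb2222 subnet-0cccc3333dddd4444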

Accept the Transit Gateway attachment as follows:

  • Navigate back to the SDDC Group's External VPC tab, select the AWS account ID used for creating your FSx for NetApp ONTAP file system, and click Accept. This process takes some time.
  • Next, you need to add the routes so that the SDDC can see the FSx file system. This is done on the same External VPC tab, where you will find a table with the VPC. In that table, there is a button called Add Routes. In the Add Route section, add the CIDR of the VPC where the FSx file system will be deployed.

In the AWS console, create the route back to the SDDC by locating the VPC on the VPC service page and navigating to its Route Table, as seen below.
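
The same route can also be added with the AWS CLI; the route table ID, SDDC CIDR, and transit gateway ID below are placeholders:

# Route traffic destined for the SDDC CIDR through the shared transit gateway
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 10.10.0.0/16 \
    --transit-gateway-id tgw-0123456789abcdef0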

Also ensure that you have the correct inbound rules to allow traffic from the SDDC Group CIDR; in this case I am using the entire SDDC CIDR. In addition to this security group, the ENI security group also needs the NFS port ranges added as inbound and outbound rules to allow communication between VMware Cloud on AWS and the FSx service.
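
As a sketch, these rules can also be added from the AWS CLI; the security group ID and CIDR are placeholders, and the full NFS port list for FSx for ONTAP should be taken from the AWS documentation for your setup:

# Allow rpcbind/portmapper and NFS from the SDDC CIDR (repeat for the remaining NFS ports)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 111 --cidr 10.10.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 2049 --cidr 10.10.0.0/16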

Deploy FSx for NetApp ONTAP file system in your AWS account

The next step is to create an FSx for NetApp ONTAP file system in your AWS account. To connect FSx to the VMware Cloud on AWS SDDC, we have two options:

  • Create a new Amazon VPC under the same connected AWS account and connect it using VMware Transit Connect, or
  • Create a new AWS account and VPC in the same region and connect it using VMware Transit Connect.

In this blog, I am deploying in the same connected VPC. To deploy, go to the Amazon FSx service page, click Create file system, and on the Select file system type page, select Amazon FSx for NetApp ONTAP.

On the next page, select the Standard create method and enter the required details (a CLI sketch of this step follows the list):

  • Select Deployment type (Multi-AZ) and Storage capacity
  • Select correct VPC, Security group and Subnet
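
For reference, a minimal AWS CLI sketch of the same Multi-AZ file system creation is shown below; the capacity, throughput, subnet, and security group values are placeholders and should be sized for your environment:

# Create a Multi-AZ FSx for NetApp ONTAP file system
aws fsx create-file-system \
    --file-system-type ONTAP \
    --storage-capacity 1024 \
    --subnet-ids subnet-0aaaa1111bbbb2222 subnet-0cccc3333dddd4444 \
    --security-group-ids sg-0123456789abcdef0 \
    --ontap-configuration "DeploymentType=MULTI_AZ_1,ThroughputCapacity=512,PreferredSubnetId=subnet-0aaaa1111bbbb2222"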

After the file system is created, check the NFS IP address under the Storage virtual machines tab. The NFS IP address is the floating IP that is used to manage access between file system nodes, and we will use this IP when configuring VMware Transit Connect to allow access to the volume from the SDDC.

We are done with creating the FSx for NetApp ONTAP file system.

MOUNT NFS EXTERNAL STORAGE TO SDDC CLUSTER

Now it’s time for you to go back to the VMware Cloud on AWS console and open the Storage tab of your SDDC. Click ATTACH DATASTORE and fill in the required values.

  • Select a cluster. Cluster-1 is preselected if there are no other clusters.
  • Choose Attach a new datastore
  • Enter the NFS IP address shown in the Endpoints section of the FSx Storage Virtual Machine tab. Click VALIDATE to validate the address and retrieve the list of mount points (NFS exports) from the server.

  • Pick one from the list of mount points exported by the server at the NFS server address. Each mount point must be added as a separate datastore.
  • Choose AWS FSx ONTAP as the datastore type.
  • Give the datastore a name. Datastore names must be unique within an SDDC.
  • Click ATTACH DATASTORE.

VMware Cloud on AWS supports external storage starting with SDDC version 1.20. To request an upgrade to an existing SDDC, please contact VMware support or notify your Customer Success Manager.

AI/ML with VMware Cloud Director


AI/ML—short for artificial intelligence (AI) and machine learning (ML)—represents an important evolution in computer science and data processing that is quickly transforming a vast array of industries.

Why is AI/ML important?

It's no secret that data is an increasingly important business asset, with the amount of data generated and stored globally growing at an exponential rate. Of course, collecting data is pointless if you don't do anything with it, but these enormous floods of data are simply unmanageable without automated systems to help.

Artificial intelligence, machine learning and deep learning give organizations a way to extract value out of the troves of data they collect, delivering business insights, automating tasks and advancing system capabilities. AI/ML has the potential to transform all aspects of a business by helping them achieve measurable outcomes including:

  • Increasing customer satisfaction
  • Offering differentiated digital services
  • Optimizing existing business services
  • Automating business operations
  • Increasing revenue
  • Reducing costs

As modern applications become more prolific, Cloud Providers need to address the increasing customer demand for accelerated computing that typically requires large volumes of multiple, simultaneous computation that can be met with GPU capability.

Cloud Providers can now leverage vSphere support for NVIDIA GPUs and NVIDIA AI Enterprise (a cloud-native software suite for the development and deployment of AI that has been optimized and certified for VMware vSphere). This enables vSphere capabilities like vMotion to be used from within Cloud Director and lets providers deliver multi-tenant GPU services, which are key to maximizing GPU resource utilization. With Cloud Director support for the NVIDIA AI Enterprise software suite, customers now have access to best-in-class, GPU-optimized AI frameworks and tools to deliver compute-intensive workloads, including artificial intelligence (AI) and machine learning (ML) applications, within their datacenters.

This solution with NVIDIA takes advantage of NVIDIA MIG (Multi-Instance GPU), which supports spatial segmentation between workloads at the physical level inside a single device and is a big deal for multi-tenant environments, driving better optimization of hardware and increased margins. Cloud Director relies on host pre-configuration for GPU services included in NVIDIA AI Enterprise, which contains vGPU technology to enable deployment and configuration on hosts and GPU profiles.

Customers can self-serve, manage, and monitor their GPU-accelerated hosts and virtual machines within Cloud Director. Cloud Providers are able to monitor (through the vCloud API and UI dashboard) NVIDIA vGPU allocation and usage per VDC and per VM to optimize utilization, and to meter and bill (through the vCloud API) NVIDIA vGPU usage averaged over a unit of time per tenant.

Provider Workflow

  • Add GPU devices to ESXi hosts in vCenter and install required drivers. 
  • Verify vGPU profiles are visible by going in to vCD provider portal → Resources → Infrastructure Resources → vGPU Profiles
  • Edit vGPU profiles to provide necessary tenant facing instructions and a tenant facing name to each vGPU profile. (Optional)
  • Create a PVDC backed by one or more clusters having GPU hosts in vCenter.
  • In the provider portal → Cloud Resources → vGPU Policies → create a new vGPU policy by following the wizard's steps.

Tenant Workflow

When you create a vGPU policy, it is not visible to tenants. You can publish a vGPU policy to an organization VDC to make it available to tenants.

Publishing a vGPU policy to an organization VDC makes the policy visible to tenants. The tenant can select the policy when they create a new standalone VM or a VM from a template, edit a VM, add a VM to a vApp, and create a vApp from a vApp template. You cannot delete a vGPU policy that is available to tenants.

  • Publish the vGPU policy to one or more tenant VDCs similar to the way we publish sizing and placement policies.
  • Create a new VM or instantiate a VM from a template. In vGPU-enabled VDCs, tenants can now select a vGPU policy.

Cloud Director not only allows for VMs; providers can also leverage Cloud Director's Container Service Extension to offer GPU-enabled Tanzu Kubernetes clusters.

Step-by-Step Configuration

The video below covers the step-by-step process of configuring the provider and tenant sides, as well as deploying TensorFlow GPU into a VM.

Tanzu on Azure Native


VMware Tanzu Kubernetes Grid provides organizations with a consistent, upstream-compatible, regional Kubernetes substrate that is ready for end-user workloads and ecosystem integrations. You can deploy Tanzu Kubernetes Grid across software-defined datacenters (SDDC) and public cloud environments, including vSphere, Microsoft Azure, and Amazon EC2. In this blog post, we will deploy Tanzu Kubernetes Grid 1.4 to Azure native VMs.

Pre-requisite

  • Deploy a client VM (in my case an Ubuntu VM). This VM will be the bootstrap VM for Tanzu, from where we will deploy the management cluster in Azure; a native Azure VM works fine for this.
  • TKG uses a local Docker install to set up a small, ephemeral kind-based Kubernetes cluster that bootstraps the TKG management cluster in Azure, so you need Docker locally to run the kind cluster.
  • Download and unpack the "Tanzu CLI" and "kubectl" from HERE onto the above VM, in a new directory named "tkg" or "tanzu"
    • unpack using #> tar -xvf tanzu-cli-bundle-v1.4.0-linux-amd64.tar
    • After you unpack the bundle file, in your folder, you will see a cli folder with multiple subfolders and files
    • Install Tanzu CLI using #> sudo install core/v1.4.0/tanzu-core-linux_amd64 /usr/local/bin/tanzu
  • Unpack the kubectl binary using:
    • #> tar -xvf kubectl-linux-v1.21.2+vmware.1.gz
    • Install kubectl using #> sudo install kubectl-linux-v1.21.2+vmware.1 /usr/local/bin/kubectl
  • Run the following command from the tanzu directory to install all the Tanzu plugins:
    • #> tanzu plugin install --local cli all
    • #> tanzu plugin list

Configure Azure resources

In this section, we will prepare Microsoft Azure for running Tanzu Kubernetes Grid. For the networking, I have prepared the Azure network as below:

  • Every Tanzu Kubernetes Grid cluster requires 2 Public IP addresses
  •  For each Kubernetes Service object with type LoadBalancer, 1 Public IP address is required.
  • A VNET with: (only required if using existing VNET else TKG can create all automatically)
    • A subnet for the management cluster control plane node
    • A Network Security Group on the control plane subnet with Allow TCP over port 22 and 6443 for any source and destination inbound security rules, to enable SSH and Kubernetes API server connections
    • One additional subnet and Network Security Group for the management cluster worker nodes.
  • Below is what the high-level network topology will look like when we deploy the Tanzu management cluster:

Get Tenant ID

Make a note of the Tenant ID value, as it will be used later. You can find it by hovering over your account name at the upper right, or by browsing to Azure Active Directory > <Your Org> > Properties > Tenant ID.

Register Tanzu Kubernetes Grid as an Azure Client App & Get Application (Client) ID

Tanzu Kubernetes Grid manages Azure resources as a registered client application that accesses Azure through a service principal account. The steps below register your Tanzu Kubernetes Grid application with Azure Active Directory, create its account, create a client secret for authenticating communications, and record the information needed later to deploy a management cluster.

  • Go to Active Directory > App registrations and click on + New registration.
  • Enter a name and select who else can use it. Leave the Redirect URI (optional) field blank.
  • Click Register. This registers the application with an Azure service principal account.

Make a note of the Application (client) ID value, we will use it later.

Get Subscription ID

From the Azure Portal top level, browse to Subscriptions. At the bottom of the pane, select one of the subscriptions you have access to, and make a note of Subscription ID.
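
If you have the Azure CLI handy, the same tenant and subscription IDs can also be read with a single command (assuming you are already logged in with az login):

#> az account show --query "{tenantId:tenantId, subscriptionId:id}" -o table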

Add a Role, Create and Record Secret ID

Click the subscription listing to open its overview pane, select Access control (IAM), and click Add a role assignment. (Alternatively, the role assignment and client secret can be created with the Azure CLI, as sketched after the list below.)

  • In the Add role assignment pane:
    • Select the Owner role
    • Assign access to "User, group, or service principal"
    • Under Select, enter the name of your app, in my case "avnish-tkg". It appears under Selected Members.
  • Click Save. A popup appears confirming that your app was added as an owner for your subscription. You can also verify this by going into the "Owned applications" section.
  • On the Azure Portal, go to Azure Active Directory, click App registrations, and select your "avnish-tkg" app under Owned applications.
  • Go to Certificates & secrets, then under Client secrets click New client secret.
  • In the Add a client secret popup, enter a Description, choose an expiration period, and click Add.
  • Azure lists the new secret with its generated value under Client secrets. Take note of the "Client Secret" value, which we will use later.
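
As an alternative to the portal steps above, the service principal, Owner role assignment, and client secret can be created in one step with the Azure CLI; "avnish-tkg" and the subscription ID are placeholders, and the command prints the appId (client ID), password (client secret), and tenant values:

#> az ad sp create-for-rbac --name "avnish-tkg" --role Owner --scopes "/subscriptions/<SUBSCRIPTION-ID>"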

With all the above steps completed, we will have four recorded values:

Subscription ID – XXXXXXX-XXX-4853-9cff-3d2d25758b70
Application Client ID – XXXXXXXX-6134-xxxx-b1a9-8fcbfd3ea189
Secret Value – XXXXX-xxxxxxxxxxxxxxxkB3VdcBF.c_C.
Tenant ID – XXXXXXXX-3cee-4b4a-a4d6-xxxxxdd62f0

We will use these values when we create the management cluster.

Create an SSH Key-Pair

To connect to the Azure management machines, we need to provide the public key part of an SSH key pair. You can use the "ssh-keygen" tool to generate one:

  • #> ssh-keygen -t rsa -b 4096 -C "email@example.com"
  • At the prompt Enter file in which to save the key (/root/.ssh/id_rsa): press Enter to accept the default
  • Enter and repeat a password for the key pair

Copy the content of .ssh/id_rsa.pub, which we will use in next section.

Accept Base VM image license

To run management cluster VMs on Azure, accept the license for their base Kubernetes version and machine OS by logging in to the Azure CLI and running the commands below:

#> az login --service-principal --username AZURE_CLIENT_ID --password AZURE_CLIENT_SECRET --tenant AZURE_TENANT_ID

AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID are your avnish-tkg app's client ID and secret and your tenant ID, as recorded above.

#> az vm image terms accept --publisher vmware-inc --offer tkg-capi --plan k8s-1dot21dot2-ubuntu-2004 --subscription AZURE_SUBSCRIPTION_ID

In Tanzu Kubernetes Grid v1.4.0, the default cluster image --plan value is k8s-1dot21dot2-ubuntu-2004.

Start the Installer Interface

On the machine on which you downloaded and installed the Tanzu CLI, run the following command:

#> tanzu management-cluster create -b "IP of this machine:port" -u

-b = bind the installer to the given IP address and port
-u = launch the installer user interface

On the local machine, browse to the above machine's IP address to access the installer interface, then choose "Microsoft Azure" and click "DEPLOY".

In the IaaS Provider section, enter the Tenant ID, Client ID, Client Secret, and Subscription ID values for your Azure account. We recorded these values in the pre-requisite section above.

  • Click Connect. The installer verifies the connection and changes the button label to Connected.
  • Select the Azure region in which to deploy the management cluster.
  • Paste the contents of your SSH public key, ".ssh/id_rsa.pub", into the text box.
  • Under Resource Group, select either the Select an existing resource group or the Create a new resource group radio button.

In the VNET for Azure section, select either the Create a new VNET on Azure or the Select an existing VNET radio button; in my case, I am using an existing VNET and subnets that I had provisioned already.

To make the management cluster private, enable the Private Azure Cluster checkbox; otherwise, leave it unticked. By default, Azure management and workload clusters are public, but you can also configure them to be private, which means their API server uses an Azure internal load balancer (ILB) and is therefore only accessible from within the cluster's own VNET or peered VNETs.

In the Management Cluster Settings section, choose the Development or Production tile and use the Instance type drop-down menu to select from different combinations of CPU, RAM, and storage for the control plane node VM or VMs.

Under Worker Node Instance Type, select the configuration for the worker node VM.

In the optional Metadata section, optionally provide descriptive information about this management cluster.

In the Kubernetes Network section, check the default Cluster Service CIDR and Cluster Pod CIDR ranges. If these CIDR ranges of 100.64.0.0/13 and 100.96.0.0/11 are not available, change the values under Cluster Service CIDR and Cluster Pod CIDR.

In the Identity Management section, enable or disable identity management settings based on your use case.

In the OS Image section, select the OS and Kubernetes version image template to use for deploying Tanzu Kubernetes Grid VMs, and click Next.

In the Registration URL field, paste the registration URL you obtained from Tanzu Mission Control. If you don't have TMC access or a URL, move on; you can register it later if you want.

In the CEIP Participation section, optionally deselect the check box to opt out of the VMware Customer Experience Improvement Program, and then click Review Configuration to see the details of the management cluster that we have configured.

Finally, click Deploy Management Cluster.

Deployment of the management cluster can take several minutes, in my case around 12 minutes.

Finally, once everything is deployed and configured, your management cluster is created.

The installer saves the configuration of the management cluster to ~/.config/tanzu/tkg/clusterconfigs with a generated filename of the form UNIQUE-ID.yaml. After the deployment has completed, you can rename the configuration file to something memorable.

If you go to the Azure portal, you should see one or three control plane VMs (depending on the plan) with names similar to CLUSTER-NAME-control-plane-abcde, one or three worker node VMs with names similar to CLUSTER-NAME-md-0-rh7xv, and Disk and Network Interface resources for the control plane and worker node VMs, with names based on the same name patterns.

At this point, the management cluster is deployed successfully, and we can go ahead and easily create workload clusters based on our requirements. Tanzu allows you to deploy and manage Kubernetes clusters on vSphere, on VMware-based public clouds, and natively on AWS and Azure; in the next post I will deploy Tanzu on AWS.
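
As a quick sketch of that next step, a workload cluster can be created and accessed with the Tanzu CLI along these lines; the cluster name and plan are just examples:

#> tanzu cluster create my-workload-cluster --plan dev
#> tanzu cluster kubeconfig get my-workload-cluster --admin
#> kubectl config use-context my-workload-cluster-admin@my-workload-cluster
#> kubectl get nodes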

Code to Container with Tanzu Build Service


Tanzu Build Service uses the open-source Cloud Native Buildpacks project to turn application source code into container images. Build Service executes reproducible builds that align with modern container standards, and additionally keeps images up-to-date. It does so by leveraging Kubernetes infrastructure with kpack, a Cloud Native Buildpacks Platform, to orchestrate the image lifecycle. Tanzu Build Service helps customers develop and automate containerized software workflows securely and at scale.

In this post, Tanzu Build Service will monitor a Git branch and automatically build containers with every push. It will then upload the container to your image registry for you to pull down and run locally or on a Kubernetes cluster.

Tanzu Build Service Installation Pre-requisite

  • Before we install Tanzu Build Service, ensure that we have a Kubernetes cluster, that all worker nodes have at least 50 GB of ephemeral storage allocated to them, and that the Kubernetes cluster is configured with a default StorageClass.

To do this on Tanzu Kubernetes Grid Service, mount a 60 GB volume at /var/lib to the worker nodes in the TanzuKubernetesCluster resource that corresponds to your TKGS cluster. I have used the YAML content below for mounting volumes while creating the Tanzu Kubernetes cluster.

  storage:
    classes:
    - tanzu01
    - tkgontkgs
    defaultClass: tkgontkgs
  topology:
    controlPlane:
      class: best-effort-medium
      count: 1
      storageClass: tkgontkgs
      volumes:
      - capacity:
          storage: 60Gi
        mountPath: /var/lib

    workers:
      class: best-effort-medium
      count: 3
      storageClass: tkgontkgs
      volumes:
      - capacity:
          storage: 60Gi
        mountPath: /var/lib
  • Ensure you have access to an existing container registry or install one using this guidance; this will be used to install Tanzu Build Service and to store the application images created by the build process.
  • Also, on a client VM from where you will connect to your Kubernetes cluster, install the Docker CLI as well as the tools below; if you need guidance on installing these tools, check Here
  • Install kapp, this is a deployment tool that allows users to manage Kubernetes resources in bulk.
  • Install ytt, this is a templating tool that understands YAML structure.
# wget -O ytt https://github.com/vmware-tanzu/carvel-ytt/releases/download/v0.35.1/ytt-linux-amd64
#chmod +x ytt
#mv ytt /usr/local/bin/ytt
  • Install kbld, this is a tool that builds, pushes, and relocates container images.
#wget -O kbld https://github.com/vmware-tanzu/carvel-kbld/releases/download/v0.30.0/kbld-linux-amd64
#mv kbld /usr/local/bin/kbld
#chmod +x /usr/local/bin/kbld
  • Install kp, it controls the kpack installation on Kubernetes, download the kp CLI for your operating system from the Tanzu Build Service page on Tanzu Network.
#mv kp-linux-0.3.1 /usr/local/bin/kp
#chmod +x /usr/local/bin/kp
  • Install imgpkg, it is a tool that relocates container images and pulls the release configuration files.
#wget -O imgpkg https://github.com/vmware-tanzu/carvel-imgpkg/releases/download/v0.17.0/imgpkg-linux-amd64
#mv imgpkg /usr/local/bin/imgpkg
#chmod +x /usr/local/bin/imgpkg
  • and finally target the Kubernetes Cluster on which you want to install “Tanzu Build Service” using :
#kubectl config use-context <context-name> 

Relocate Images to private Registry

First, we need to relocate images from the Tanzu Network registry to an internal image registry. To do that, log in to the image registry where you want to store the images by running:

#>docker login harbor.tanzu.zpod.io

<harbor.tanzu.zpod.io> - this is my private registry host name 

Now login to the Tanzu Network registry with your Tanzu Network credentials:

#>docker login registry.pivotal.io

Now let's relocate the images to your local registry using the "imgpkg" command:

#>imgpkg copy -b "registry.pivotal.io/build-service/bundle:1.2.2" --to-repo harbor.tanzu.zpod.io/tbs/build-service

This completes the image relocation process; now let's move on to the installation.

Tanzu Build Service Installation

Pull the Tanzu Build Service bundle image on your client vm from your internal registry using imgpkg:

#>imgpkg pull -b "harbor.tanzu.zpod.io/tbs/build-service:1.2.2" -o /tmp/bundle

Use the Carvel tools kapp, ytt, and kbld (which we installed in the pre-requisite section) to install Build Service and define the required Build Service parameters by running:

#>ytt -f /tmp/bundle/values.yaml -f /tmp/bundle/config/ -f /tmp/ca.crt \
    -v docker_repository='harbor.tanzu.zpod.io/tbs/build-service' \
    -v docker_username='admin' -v docker_password='<password>' \
    -v tanzunet_username='tripathiavni@vmware.com' -v tanzunet_password='<password>' \
    | kbld -f /tmp/bundle/.imgpkg/images.yml -f- \
    | kapp deploy -a tanzu-build-service -f- -y




# /tmp/ca.crt - path to the registry CA certificate
# docker_repository - image repository where the TBS images exist

/tmp/ca.crt is the CA certificate of my registry. If all the above steps are correct, you should see "succeeded" as below.

That means the TBS installation is complete; let's move on to the next step.

Import Tanzu Build Service Dependency Resources

The Tanzu Build Service dependencies, such as stacks, buildpacks, and builders, are used to build applications and keep them patched. These must be imported with the kp CLI and the Dependency Descriptor (descriptor-<version>.yaml) file from the Tanzu Build Service Dependencies page.

Now run the command below to import all the dependencies:

#>kp import -f /tmp/descriptor-100.0.146.yaml --registry-ca-cert-path /tmp/CA.cer

Verify Installation

To verify the Build Service installation, let's target the Kubernetes cluster on which Tanzu Build Service has been installed and run the "kp" command that we installed as part of the pre-requisites.

List the cluster builders available in your installation using:

#> kp clusterbuilder list

You should see output like the below.

You can also run a few additional commands:

#> kp clusterstack list
#> kp clusterstore list

This completes the installation of Tanzu build Service.

Build and Deploy a Sample App

First, let's create a "secret" for GitLab, as I have installed GitLab in my vSphere lab environment:

#> kp secret create github-creds --git http://10.96.63.48 --git-user demoadmin -n tbs-demo

Let's also create a secret for the private registry; in my case, the registry is hosted on "harbor.tanzu.zpod.io" in the vSphere environment:

#> kp secret create my-registry-creds --registry harbor.tanzu.zpod.io --registry-user admin --namespace tbs-demo

With the next command, we are telling Tanzu Build Service where to retrieve the source code. Tanzu Build Service will be configured to watch the master branch by default, but you can configure it to watch your own development branch for whatever feature or bug you happen to be working on. Finally, the tag is where the image will be pushed in your registry. Let's create an image using source code from my Git repository by running:

#>kp image create spring-petclinic --tag harbor.tanzu.zpod.io/library/spring-petclinic:latest --namespace tbs-demo --git http://10.96.63.48/demoadmin/wwcp.git

It will download its dependencies and start building the image.

Once completed, Tanzu Build Service will put a copy of the image into Harbor registry, as well as onto your local Kubernetes cluster within the default namespace.

You can check the image and build status by running the commands below.
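
Assuming the image was created in the tbs-demo namespace as above, the kp CLI can list the image, list its builds, and stream the build logs:

#>kp image list -n tbs-demo
#>kp build list spring-petclinic -n tbs-demo
#>kp build logs spring-petclinic -n tbs-demo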

We can now deploy this image either locally or in a Kubernetes environment, or you can set up continuous deployment to deploy built images on any Kubernetes platform. In the example below, I am deploying it on my Tanzu Kubernetes cluster from the private registry where Build Service pushed the container image.
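
A minimal way to run the built image on the cluster, assuming the Spring PetClinic app listens on port 8080 and the cluster can pull from the Harbor registry, is a simple Deployment plus a LoadBalancer Service:

#>kubectl create deployment spring-petclinic --image=harbor.tanzu.zpod.io/library/spring-petclinic:latest -n tbs-demo
#>kubectl expose deployment spring-petclinic --type=LoadBalancer --port=80 --target-port=8080 -n tbs-demo
#>kubectl get svc spring-petclinic -n tbs-demo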

As you can see, once the image is deployed, I can access it from my browser successfully.

This completes the Step-by-Step procedure to install and use Tanzu Build Service. If you would like to dive deeper into VMware Tanzu Build Service, check out the documentation section.

Deploy Tanzu Kubernetes Clusters using Tanzu Mission Control


VMware Tanzu Mission Control is a centralized management platform for consistently operating and securing your Kubernetes infrastructure and modern applications across multiple teams and clouds.

Available through VMware Cloud services, Tanzu Mission Control provides operators with a single control point to give developers the independence they need to drive business forward, while ensuring consistent management and operations across environments for increased security and governance.

Use Tanzu Mission Control to manage your entire Kubernetes footprint, regardless of where your clusters reside.

Getting Started with Tanzu Mission Control

To get started with Tanzu Mission Control, use VMware Cloud Services tools to gain access to VMware Tanzu Mission Control.

Launch the TMC Console – Log in to the Tanzu Mission Control console to start managing clusters.

Create a Cluster Group – Create a cluster group to help organize and manage your clusters.

Register the TKGS Management Cluster – When a customer registers a Tanzu Kubernetes Grid management cluster, all of its workload clusters can be brought under the management of Tanzu Mission Control, which facilitates consistent management using all of the capabilities of Tanzu Mission Control, as well as provisioning resources and creating new clusters directly from Tanzu Mission Control.

Once the customer has access to Tanzu Mission Control, created a cluster group, and registered the management cluster, follow the video below to deploy Kubernetes clusters on vCenter using the management cluster. The video has step-by-step instructions to help customers on their TMC journey.

Tanzu Mission Control helps customers bring all their Kubernetes clusters together; once they are together, you can manage policies and configuration for these clusters and enable developer self-service.

Tanzu Mission Control is now available to VMware Cloud Provider Partners as a Software-as-a-Service (SaaS) offering through VMware Cloud Partner Navigator. This unlocks new opportunities for cloud providers to offer Kubernetes (K8s) managed services for multi-cloud and multi-team environments. For more details, check HERE or reach out to your Business Development Manager.

Deploy Harbor Registry on TKG Clusters


Tanzu Kubernetes Grid Service, informally known as TKGS, lets you create and operate Tanzu Kubernetes clusters natively in vSphere with Tanzu. You use the Kubernetes CLI to invoke the Tanzu Kubernetes Grid Service and provision and manage Tanzu Kubernetes clusters. The Kubernetes clusters provisioned by the service are fully conformant, so you can deploy all types of Kubernetes workloads you would expect. vSphere with Tanzu leverages many reliable vSphere features to improve the Kubernetes experience, including vCenter SSO, the Content Library for Kubernetes software distributions, vSphere networking, vSphere storage, vSphere HA and DRS, and vSphere security.

Harbor is an open source, trusted, cloud native container registry that stores, signs, and scans content. Harbor extends the open source Docker Distribution by adding the functionalities usually required by users, such as security, identity control, and management. So let's go ahead and deploy Harbor. I have already provisioned a TKG cluster, and you can log in to the TKG cluster by using the command below:

#kubectl vsphere login --server=<supervisor-cluster-ip> --tanzu-kubernetes-cluster=<namespace-name> --tanzu-kubernetes-cluster-name=<cluster-name>

Set the correct context, as you might have many clusters, by using the command below:

#kubectl config use-context <cluster-name01>

Add Harbor Helm repository

Now let's install Harbor; you can use various Helm repositories.

Harbor – https://github.com/goharbor/harbor-helm, or also the one from

Bitnami – https://github.com/bitnami/charts/tree/master/bitnami/harbor, which I'm going to use.

Add the repository of your choice to your client…

#helm repo add harbor https://helm.goharbor.io
#helm repo add bitnami https://charts.bitnami.com/bitnami

…and update Helm subsequently.

#helm repo update

Installing Harbor

We will deploy Harbor in a new Kubernetes namespace, which we will name "tanzu-system-registry". Create the namespace with kubectl create ns tanzu-system-registry and start the deployment process by executing the following helm command with some corresponding options:

helm install harbor bitnami/harbor \
--set harborAdminPassword=admin \
--set global.storageClass=tkgontkgs \
--set service.type=LoadBalancer \
--set externalURL=harbor.tanzu.zpod.io \
--set service.tls.commonName=harbor.tanzu.zpod.io \
-n tanzu-system-registry

Check the pod status by using this command:

#kubectl get pods -n tanzu-system-registry

Let's check the services running inside the "tanzu-system-registry" namespace; this will give us the external IP of the service.

#kubectl get svc -n tanzu-system-registry

The above command will give us an "External IP" that was auto-configured in NSX-T. Let's browse to the external IP using the user name "admin" and the password we set in the helm command.

Now we can browse and access the registry successfully.

You can push images to the Harbor registry to make them available to all clusters that are running in the Tanzu Kubernetes Grid instance. In my case, I deployed this registry for my "Tanzu Build Service" installation, as TBS needs a registry as a pre-requisite.
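
For example, pushing an image into Harbor's default "library" project from a Docker client looks like this (assuming the registry certificate is trusted by the client):

#docker login harbor.tanzu.zpod.io
#docker tag nginx:latest harbor.tanzu.zpod.io/library/nginx:latest
#docker push harbor.tanzu.zpod.io/library/nginx:latest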

Integrate Azure Files with Azure VMware Solution


Azure VMware Solution is a VMware validated solution with ongoing validation and testing of enhancements and upgrades. Microsoft manages and maintains the private cloud infrastructure and software, which allows customers to focus on developing and running workloads in their private clouds.

In this blog post, I will be configuring virtual machines running on Azure VMware Solution to access Azure Files over an Azure private endpoint. This is an end-to-end, four-step process described below:

and explained in this video:

Here is the step-by-step process of configuring and accessing Azure Files on Azure VMware Solution:

Step -01 Deploy Azure VMware SDDC

Azure VMware Solution provides customers with private clouds that contain vSphere clusters, built on dedicated bare-metal Azure infrastructure. The minimum initial deployment is three hosts, and additional hosts can be added one at a time, up to a maximum of 16 hosts per cluster. All provisioned private clouds have vCenter Server, vSAN, vSphere, and NSX-T. Customers can migrate workloads from their on-premises environments, deploy new virtual machines (VMs), and consume Azure services from their private clouds.

In this blog, I am not going to cover the AVS deployment process, as this post is more focused on Azure Files integration; you can follow the standard process for the deployment of Azure VMware Solution or check the official documentation Here.

Step -02 Create ExpressRoute to Connect to Azure Native Services

In this section, we need to decide whether to use an existing or a new ExpressRoute virtual network gateway, and follow this decision tree for your AVS to Azure native services configuration.

Diagram showing the workflow for connecting Azure Virtual Network to ExpressRoute in Azure VMware Solution.

For this blog post, I will create a new VNET and a new ExpressRoute connection and attach that VNET to Azure VMware Solution. So, first things first: deploy your Azure VMware Solution, and once that's done, go ahead and create an Azure Virtual Network and Virtual Network Gateway.

Create Azure virtual network
  • On the Virtual Network page, select Create to set up your virtual network for your private cloud.
  • On the Create Virtual Network page, enter the details for your virtual network.
  • On the Basics tab, enter a name for the virtual network, select the appropriate region, and select Next
  • On the IP Addresses tab (NOTE: you must use an address space that does not overlap with the address space you used when you created your private cloud), select + Add subnet, and on the Add subnet page give the subnet a name and an appropriate address range. When complete, select Add.
  • Select Review + create.

Create a virtual network gateway

Now that we have created a virtual network, we will create a virtual network gateway. On the Virtual network gateway page, select Create. On the Basics tab of the Create virtual network gateway page, provide values for the fields as below, and then select Review + create.

  • Subscription: Pre-populated value with the subscription to which the resource group belongs.
  • Resource group: Pre-populated value for the current resource group. The value should be the resource group you created in a previous step.
  • Name: Enter a unique name for the virtual network gateway.
  • Region: Select the geographical location of the virtual network gateway (the same region as AVS).
  • Gateway type: Select ExpressRoute.
  • SKU: Leave the default value: standard.
  • Virtual network: Select the virtual network you created previously. If you don't see the virtual network, make sure the gateway's region matches the region of your virtual network.
  • Gateway subnet address range: This value is populated when you select the virtual network. Don't change the default value.
  • Public IP address: Select Create new.
Connect ExpressRoute to the virtual network gateway

Let's go to the Azure portal, navigate to the Azure VMware Solution private cloud, select Manage > Connectivity > ExpressRoute, and then select + Request an authorization key.

Provide a name for it and select Create. It may take about 30 seconds to create the key. Once created, the new key appears in the list of authorization keys for the private cloud.

Copy the authorization key and ExpressRoute ID; we will need them to complete the peering. Now navigate to the virtual network gateway and select Connections > + Add.

On the Add connection page, provide values for the fields, and select OK.

  • Name: Enter a name for the connection.
  • Connection type: Select ExpressRoute.
  • Redeem authorization: Ensure this box is selected.
  • Virtual network gateway: The virtual network gateway we deployed above.
  • Authorization key: Paste the authorization key copied in the previous step.
  • Peer circuit URI: Paste the ExpressRoute ID copied in the previous step.

The connection between your ExpressRoute circuit and your Virtual Network is created successfully.

To test connectivity, I have deployed a VM on Azure VMware Solution and one VM in native Azure, and I can reach both VMs. From the console of the AVS VM I can RDP to the Azure native VM, and I can ping from the Azure native VM to the VM deployed in AVS; this confirms that we have successfully established connectivity between AVS and native Azure.

Step -03 Create Storage and File Shares

Now let's move to Step 03. Azure file shares are deployed into storage accounts, which are top-level objects that represent a shared pool of storage. This pool of storage can be used to deploy multiple file shares.

Once you are in the Azure portal, click "Create" under Storage account and create a storage account in the same region where Azure VMware Solution is deployed. We will use this storage account to configure "Azure Files" over a private link.
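
If you prefer to script this step, the storage account (and the file share we create later in this section) can also be provisioned with the Azure CLI; the names, resource group, and region below are placeholders:

# Create the storage account in the same region as the AVS private cloud
az storage account create --name avsfilesdemo --resource-group avs-files-rg --location westeurope --sku Standard_LRS --kind StorageV2

# Create an SMB file share in that storage account (quota in GiB)
az storage share-rm create --storage-account avsfilesdemo --name avsshare --quota 1024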

Once the storage account is created, let's go to the networking section of the storage account, which allows you to configure networking options. In addition to the default public endpoint for a storage account, Azure Files provides the option to have one or more private endpoints.

A private endpoint is an endpoint that is only accessible within an Azure virtual network and by the AVS network. When you create a private endpoint for your storage account, your storage account gets a private IP address from within the address space of your virtual network, much like how an on-premises file server or NAS device receives an IP address within the dedicated address space of your on-premises network.

Let's create a private endpoint by clicking "+ Private endpoint".

Enter the basic information, and make sure the "Region" is the same region where your AVS has been deployed.

On the next screen, ensure you choose "file" as the "Target sub-resource".

Select the Azure virtual network and subnet that we created in Step 02 and click Create.
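
For reference, a scripted version of the private endpoint creation might look like the sketch below; the names are placeholders, and flag names can vary slightly between Azure CLI versions:

# Create a private endpoint for the "file" sub-resource of the storage account
az network private-endpoint create \
    --resource-group avs-files-rg \
    --name avsfiles-pe \
    --location westeurope \
    --vnet-name avs-vnet \
    --subnet avs-subnet \
    --private-connection-resource-id $(az storage account show --name avsfilesdemo --resource-group avs-files-rg --query id -o tsv) \
    --group-id file \
    --connection-name avsfiles-pe-conn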

Once in the storage account, select the File shares and click on “+ File share”. The new file share blade should appear on the screen. Complete the fields in the new file share blade to create a file share:

  • Name: the name of the file share to be created.
  • Quota: the quota of the file share for standard file shares; the provisioned size of the file share for premium file shares.
  • Tiers: the selected tier for a file share. 

Now the share is created. To mount the share, select the file share that we need to mount and then click "Connect".

Select the drive letter to mount the share to, choose the authentication method, and copy the provided script.

Step -04 Access Files over SMB

On the Windows server running on Azure VMware Solution, paste the script into a shell on the host you'd like to mount the file share to, and run it.

This should mount the Azure file share to your Windows server as the Z: drive, which you can use to transfer or store any data.

In case you face issues accessing the file share using the host DNS name, get the private IP of the share connection by clicking "Network Interface" and copying the private IP.

Add this private IP address to the Windows server's hosts file, and then it should work as expected.

This completes the integration of Azure VMware Solution with Azure Files (an Azure native service) over a private link. Similarly, customers can use many more Azure native services that can be easily integrated with Azure VMware Solution.

vSphere Tanzu with AVI Load Balancer


With the release of vSphere 7.0 Update 2, VMware adds a new load balancer option for vSphere with Tanzu that provides a production-ready load balancer for your vSphere with Tanzu deployments. This load balancer is called NSX Advanced Load Balancer (NSX ALB), also known as the AVI Load Balancer. It provides virtual IP addresses for the Supervisor Control Plane API server, the TKG guest cluster API servers, and any Kubernetes applications that require a service of type LoadBalancer. In this post, I will go through a step-by-step deployment of the new NSX ALB along with vSphere with Tanzu.

VLAN & IP address Planning

There are many ways to plan IP addressing; in this lab, I will place management, VIP, and workload nodes on three different networks. For this deployment, I will be using three VLANs: one for Tanzu management, one for the frontend/VIP network, and one for the Supervisor cluster and TKG clusters. Here is my IP planning sheet:

Deploying & Configuring NSX ALB (AVI)

Now let's deploy the NSX ALB controller (AVI LB) by following a process very similar to the one we follow to deploy any other OVA; I will be assigning the NSX ALB management IP address from the management network address range. The NSX ALB is available as an OVA. In this deployment, I am using version 20.1.4. The only information required at deployment time is:

  • A static IP address
  • A subnet mask
  • A default gateway
  • A sysadmin login authentication key

I have deployed one controller appliance for this lab, but if you are doing a production deployment, it is recommended to create a three-node controller cluster for high availability and better performance.

Once the OVA deployment completes, power on the VM and wait for some time before you browse to the NSX ALB URL using the IP address provided during deployment. Log in to the controller and then:

  • Enter DNS Server Details and Backup Passphrase
  • Add NTP Server IP address
  • Provide Email/SMTP details ( not mandatory)

Next, choose VMware vCenter as your "Orchestrator Integration". This creates a new cloud configuration in the NSX ALB called Default-Cloud. Enter the below details on the next screen:

  • Insert IP of your vCenter,
  • vCenter Credential
  • Permission – Write Permission
  • SDN Integration – None
  • Select appropriate vCenter “Data Center”
  • For Default Network IP Address Management – Static

On the next screen, we define the IP address pool for the Service Engines.

  • Select the Management Network (the "management interface" of the Service Engines will be connected to this network)
  • Enter the IP Subnet
  • Enter free IPs into the IP Address Pool section
  • Enter the Default Gateway

Select No for configuring multiple Tenants. Now we’re ready to get into the NSX ALB configuration.

Create IPAM Profile

IPAM will be used to assign VIPs to virtual services, Kubernetes control planes, and applications running inside pods. To create an IPAM profile, go to Templates -> Profiles -> IPAM/DNS Profiles.

  • Assign a name to the profile; this IPAM will be for the "frontend" network
  • Select Type – "Avi Vantage IPAM"
  • Cloud for Usable Network – choose "Default-Cloud"
  • Usable Network – choose the port group, in my case "frontend" (all vCenter port groups are populated automatically by vCenter discovery)

Create and configure a DNS profile as below (this is optional):

Go to "Infrastructure", click "Cloud", edit "Default Cloud", and update the IPAM Profile and DNS Profile fields with the IPAM profile and DNS profile that we created above.

 Configure the VIP Network

On the NSX ALB console, go to "Infrastructure" and then "Networks"; this will display all the networks discovered by NSX ALB. Select the "frontend" network and click Edit.

  • Click on "Add Subnet"
  • Enter the subnet, in my case 192.168.117.0/24
  • Click on Static IP Address pool:
  • Ensure "Use Static IP Address for VIPs and SE" is selected
    • Enter the IP segment pool, in my case 192.168.117.100-192.168.117.200
    • Click Save

Create New Controller Certificate

The default AVI certificate doesn't contain an IP SAN and can't be used by vCenter/Tanzu to connect to AVI, so we need to create a custom controller certificate and use it during the Tanzu management plane deployment. Let's create the controller certificate by going to Templates -> Security -> SSL/TLS Certificates -> Create -> Controller Certificate.

Complete the page with the required information and make sure the "Subject Alternative Name (SAN)" is the NSX ALB controller IP/cluster IP or hostname.

Then go to Administration -> Settings -> Access Settings and edit System Access Settings:

Delete all the certificates in the SSL/TLS certificate field and choose the certificate that we created in the section above.

Go to Templates -> Security -> SSL/TLS Certificates and copy the certificate we created, to use while enabling the Tanzu management plane.

Configure Routing

Since the workload network (192.168.116.0/24) is on a different subnet from the VIP network (192.168.117.0/24), we need to add a static route in the NSX ALB controller. Go to the Infrastructure page, navigate to Routing and then to Static Route. Click the Create button and create static routes accordingly.

Enable Tanzu Control Plane (Workload Management)

I am not going to go through the full deployment of workload management; the steps are similar to those detailed HERE. However, there are a few steps that are different (a quick verification sketch follows the list):

  • On page 6, choose Type = AVI as your load balancer type.
  • There is no load balancer IP address range required; this is now provided by the NSX ALB.
  • The certificate we need to provide should be the NSX ALB certificate that we created in the previous step.
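
Once workload management is up, a quick way to confirm that NSX ALB is handing out VIPs from the frontend pool is to expose a test deployment as a LoadBalancer service and check that it receives an external IP:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=LoadBalancer --port=80
kubectl get svc nginx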

The new NSX Advanced Load Balancer is far superior to HA-Proxy, especially in provider environments. Providers can deploy, offer, and manage K8s clusters with a VMware-supported load balancer type, and even though the configuration requires a few additional steps, it is very simple to set up. The visibility provided into the health and usage of the virtual services is going to be extremely beneficial for day-2 operations and should provide great insights for those providers who are responsible for provisioning and managing Kubernetes distributions running on vSphere. Feel free to share any feedback…

VMware Cloud Director Two Factor Authentication with VMware Verify

In this post, I will be configuring two-factor authentication (2FA) for VMware Cloud Director using Workspace ONE Access, formerly known as VMware Identity Manager (vIDM). Two-factor authentication is a mechanism that checks username and password as usual, but adds an additional security control before users are authenticated. It is a particular deployment of a more generic approach known as Multi-Factor Authentication (MFA). Throughout this post, I will be configuring VMware Verify as that second authentication factor.

What is VMware Verify ?

VMware Verify is built in to Workspace ONE Access (vIDM) at no additional cost, providing a 2FA solution for applications. VMware Verify can be set as a requirement on a per-app basis for web or virtual apps on the Workspace ONE launcher, or to log in to Workspace ONE to view your launcher in the first place. The VMware Verify app is currently available on iOS and Android. VMware Verify supports three methods of authentication:

  1. OneTouch approval
  2. One-time passcode via VMware Verify app (soft token)
  3. One-time passcode over SMS

By using VMware Verify, security is increased since a successful authentication does not depend only on something users know (their passwords) but also on something users have (their mobile phones), and for a successful break-in, attackers would need to steal both things from compromised users.

1. Configure VMware Verify

First you need to download and install “VMware Workspace ONE Access”, which is very simple to deploy using the OVA. VMware Verify is provided as a service, so it does not require installing anything on an on-premises server. To enable VMware Verify, you must contact VMware support; they will provide you with a security token, which is all you need to enable the integration with VMware Workspace ONE Access (vIDM). Once you get the token, log in to vIDM as an admin user and:

  1. Click on the Identity & Access Management tab
  2. Click on the Manage button
  3. Select Authentication Methods
  4. Click on the configure icon (pencil) next to VMware Verify
  5. A new window will pop-up, on which you need to select the Enable VMware Verify checkbox, enter the security token provided by VMware support, and click on Save.

2. Create a Local Directory on VMware Workspace One Access

VMware Workspace ONE Access supports not only Active Directory and LDAP directories but also other types of directories, such as local directories and Just-in-Time directories. For this lab, I am going to create a local directory using the local directory feature of Workspace ONE Access. Local users are added to a local directory on the service, and we need to manage the local user attribute mapping and password policies. You can create local groups to manage resource entitlements for users.

  1. Select the Directories tab
  2. Click on “Add Directory”
  3. Specify the directory name and domain name (this is the same domain name I registered for VMware Verify)

3. Create/Configure a built-in Identity Provider

Once the second authentication factor is enabled as described in steps 1 and 2, it must be added as an authentication method to a Workspace ONE Access built-in identity provider. If one already exists in your environment, you can reconfigure it; alternatively, you can create a new built-in identity provider as explained below. Log in to Workspace ONE Access as an admin user and then:

  1. Select the Identity & Access Management tab
  2. Click on the Manage button
  3. Click on the Identity Providers link
  4. Click on the Add Identity Provider button and select Create Built-in IDP
  5. Enter a name describing the Identity Provider (IdP)
  6. Which users can authenticate using the IdP – in the example below I am selecting the local directory that I created above.
  7. Network ranges from which users will be directed to the authentication mechanism described on the IdP
  8. The authentication methods to associate with this IdP – Here I am selecting VMware Verify as well as Local Directory.
  9. Finally click on the Add button

4. Update Access Policies on Workspace One Access

The last configuration step on Workspace ONE Access (vIDM) is to update the default access policy to include the second-factor authentication mechanism. For that, log in to Workspace ONE Access as an admin user and then:

  1. Select the Identity & Access Management tab
  2. Click on the Manage button
  3. Click on the Policies link
  4. Click on the Edit Default Policy button
  5. This will open a new page showing the details of the default access policy. Go to “Configuration” and click on “ALL RANGES”.

A new window will pop-up. Modify the settings right below the line “then the user may authenticate using:”

  1. Select Password as the first authentication method – this way users will have to enter their ID and password as defined in the configured local directory.
  2. Select the second authentication mechanism; here I am adding VMware Verify – after a successful password authentication, users will get a notification on their mobile phones to accept or deny the login request.
  3. I am leaving the line “If preceding Authentication Method fails or is not applicable, then:” empty because I don’t want to configure any fallback authentication mechanism; you can choose one based on your requirements.

5. Download the App on Your Mobile and Register a User from a Cloud Director Organization

  1. Access the app store on your mobile phone, search for VMware Verify, and download it.
  2. Once it is downloaded, open the application. It will ask for your mobile number and e-mail address; enter your domain details. On the screenshot below, I’m providing my mobile number and an e-mail address which is only valid in my lab. After clicking OK, you will be given two options for verifying your identity:
    • Receiving an SMS message – the SMS contains a registration code that you enter into the app.
    • Receiving a phone call – after clicking on this option, the app shows a registration code that you need to type on the phone pad once you receive the call.
  3. Since I am using the SMS option, the app asks you to manually enter the code received in the SMS (XopRcVjd4u2).
  4. Once your identity has been verified, you will be asked to protect the app by setting a PIN. After that, the app will show that there are no accounts configured yet.
  5. Click on Account and add the account.

Immediately after that, we will start receiving tokens in the VMware Verify mobile app, so at this point you are ready to move to the next step.

6. Enable VMware Cloud Director Federation with VMware Workspace ONE Access

There are three authentication methods that are supported by vCloud Director:

Local: local users which are created at the time of installing vCD or while creating any new organization.

LDAP service:  LDAP service enables the organisations to use their own LDAP servers for authentication. Users can then be imported into vCD from the configured LDAP.

SAML Identity Provider: A SAML Identity Provider can be used to authenticate users in organisations. SAML v2.0 metadata is required for the service to be configured. The metadata must include the location of the single sign-on service, the single logout service, and the X.509 certificate for the service. In this post we will be using federation between VMware Workspace One Access with VMware Cloud Director.
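
Before uploading anything, it can be handy to sanity-check the metadata files involved. Here is a minimal PowerShell sketch (the file names match the ones used later in this post; it relies on PowerShell's XML adapter resolving elements by local name, so the md: namespace prefix can be ignored):

# Inspect the vCloud Director service provider metadata (spring_saml_metadata.xml)
[xml]$sp = Get-Content .\spring_saml_metadata.xml
$sp.EntityDescriptor.entityID                                        # Entity ID configured in vCD
$sp.EntityDescriptor.SPSSODescriptor.SingleLogoutService.Location    # single logout endpoint(s)
$sp.EntityDescriptor.SPSSODescriptor.KeyDescriptor.Count             # X.509 key descriptors present

# Inspect the IdP metadata downloaded from Workspace ONE Access (idp.xml)
[xml]$idp = Get-Content .\idp.xml
$idp.EntityDescriptor.IDPSSODescriptor.SingleSignOnService.Location  # single sign-on endpoint(s)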

So, let’s go ahead and login to VMware Cloud Director Organization and go to “Administration” and Click on “SAML”

  1. Enable federation by setting “Entity ID” to any unique string; in this case I am using the org name, which in my lab is “abc”.
  2. Then click on “Generate” to generate a new certificate and click “SAVE”.
  3. Download the metadata from the link; it will download the file “spring_saml_metadata.xml”. This activity can be performed by a system or organization administrator.
  4. In the VMware Workspace ONE Access (vIDM) admin console, go to “Catalog” and create a new web application.
    1. Enter the application name and description, upload an icon, and choose a category.
  5. On the next screen, keep the Authentication Type as SAML 2.0 and paste the XML metadata downloaded above (spring_saml_metadata.xml) into the URL/XML window. Scroll down to Advanced Properties.
  6. In Advanced Properties we will keep the defaults but add Custom Attribute Mappings, which describe how vIDM user attributes will translate to VCD user attributes.
  7. Now we can finish the wizard by clicking Next, selecting the access policy (keep the default), and reviewing the summary on the next screen.
  8. Next we need to retrieve the metadata configuration of vIDM by going back to Catalog and clicking on Settings. From SAML Metadata, download the Identity Provider (IdP) metadata.
  9. Now we can finalize the SAML configuration in vCloud Director: on the Federation page, toggle the Use SAML Identity Provider button to enable it, import the downloaded metadata (idp.xml) with the Browse and Upload buttons, and click Apply.
  10. Finally, we need to import some users/groups to be able to use SAML. You can import VMware Workspace ONE Access (vIDM) users by their user name or group, and we can also assign a role to each imported user.

This completes the federation process between VMware Workspace ONE Access (vIDM) and VMware Cloud Director. For more details, you can refer to This Blog Post.

Result – Cloud Director Two Factor Authentication in Action

Let your tenants go to a browser and open their tenant URL; they will be automatically redirected to the VMware Workspace ONE Access page for authentication:

  1. The user enters a user name and password; if the user is successfully authenticated, the flow moves to 2FA.
  2. In the next step, the user gets a notification on their mobile phone.
  3. Once the user approves the authentication on the phone, VMware Workspace ONE Access allows access based on the role given in VMware Cloud Director.

On-Board a New User

  1. Create a new user in VMware Workspace ONE Access and also add the user to the application access.
  2. The user gets an email to set up his/her password and must configure it.
  3. The administrator logs in to Cloud Director and imports the newly created user from SAML with a Cloud Director role.
  4. The user browses to the cloud URL, and after logging in to the portal with user ID and password, he/she is asked to provide a mobile number for the second authentication factor.
  5. After entering the mobile number, if the user has installed the “VMware Verify” app, he/she gets an Approve/Deny notification; if the app has not been installed, the user can click “Sign in with SMS”, receive an SMS, and enter that code as the second authentication factor.
  6. Once the user enters the passcode received on his/her cell phone, VMware Workspace ONE Access allows the user to log in to Cloud Director.

This completes the installation and configuration of VMware Verify with VMware Cloud Director. You can add additional touches, such as branding of your cloud, to give it your own cloud identity.

vSphere 6.5 Encryption using HyTrust KeyControl 4.1 – Part-2

In Part-1 we configured the HyTrust KeyControl cluster; now let’s configure this cluster in vCenter and set up encryption for virtual machines.

Let’s create a user to be used with vCenter – click on the Users tab to create a new user, then click on the Actions drop-down button and select Create User.

1.png

Create a user with the same name as the VMware VCSA for ease of use.

NOTE – Do NOT specify a password, else trust will fail.

2.png

Highlight the newly created user, click the Actions dropdown button, then click the Download Certificate option. This will download the certificate created for that user. A zip file containing the Certificate of Authority (CA) will be downloaded.

3.png

Once you have downloaded the certificate, log in to the VCSA, highlight the vCenter on the left-hand pane, click on the Configure tab on the right-hand pane, click on Key Management Servers, then click the Add KMS button.

4.png

Enter a Cluster name, Server Alias, Fully Qualified Domain Name (FQDN)/IP of the server, and the port number. Leave the other fields as the default, then click OK.

5.png

Click on Yes to set the KMS cluster “Hytrust” as the default.

6.png

Click on Trust to trust the Certificate from HyTrust KeyControl.

7.png

Now we have to establish the trust relationship between vCenter and HyTrust KeyControl. Highlight the KeyControl appliance, click on All Actions, then click on Establish trust with KMS.

8.png

Select the Upload certificate and private key option, then click OK.

9.png

Click on Upload file button

10

Browse to where the CA file was previously generated, select the “vcenter”.pem file, then click Open.

11.png

Repeat the process for the private key by clicking on the second Upload file button and Verify that both fields are populated with the same file, then click OK.

13.png

You will now see that the connection status is shown as Normal, indicating that trust has been established. HyTrust KeyControl is now set up as the Key Management Server (KMS) for vCenter.

14.png

Now that we have successfully added one node of the cluster, add the other node by following the same steps.

15.png

You can now begin encrypting virtual machines with vSphere 6.5, which I will be covering in the next post. Happy Learning 🙂

 

 

vSphere 6.5 Encryption using HyTrust KeyControl 4.1 – Part-1

HyTrust KeyControl supports a fully functional KMIP server that can be deployed as a vSphere Key Management Server. Once deployment is complete and a trusted connection between KeyControl and vSphere has been established, KeyControl can manage the encryption keys for virtual machines in the cluster that have been encrypted with vCenter Server for vSphere Virtual Machine Encryption or VMware vSAN Encryption.

In this post we will deploy the HyTrust KeyControl KMS server and set up a KMS cluster.

There are two methods for installing KeyControl: we can use the OVA appliance, or we can use the ISO. In this post we will use the OVA method.

Open your vSphere Web Client and click on “Deploy OVF Template”.

1.png

Choose the OVF file.

2.png

Provide a name for the HyTrust KeyControl appliance, select a deployment location, then click Next.

3.png

Select the vSphere cluster or host where you would like to install the HyTrust KeyControl appliance, then click Next.

4.png

Review the details, then click Next.

5.png

Select the proper configuration from the drop down menu, then click Next.

6.png

Select the preferred storage and disk format for the KeyControl appliance, then click Next.

7.png

Select the appropriate network, enter appropriate network details then click Next.

9.png

Review the summary screen, if everything is correct, click Finish.

10.png

11

Appliance deployment has completed successfully. Since I am going to set up a cluster, I will go ahead and deploy another appliance using the same procedure.

Once both appliances have been deployed, power on the newly created HyTrust KeyControl appliance and open a console to it. Set the system password, then press OK.

12.png

Since this is the first node, select No, then press Enter.

13.png

Review the Appliance Configuration, then press OK.

14.png

The first KeyControl appliance is now configured and you can move on to the KeyControl web GUI. Open a web browser and navigate to the IP or FQDN of the KeyControl appliance. Use the following credentials for the initial login:
Username: secroot
Password: secroot

15.png

After logging in, read and accept the EULA by clicking on I Agree at the bottom of the agreement.

16.png

Enter a new password for the secroot account, then click Update Password.

17

We have now successfully set up our first node.

18

Setup Cluster

Power on the second appliance and follow the same steps as above, except click “Yes” here.

19.png

This takes us to the cluster join process.

20.png

Enter the IP address of the first node.

21.png

The final piece of information required is the passphrase, which must be a minimum of 16 characters.

25.png

The node must now be authenticated through the webGUI, as the following message indicates:

23

At this point you need to log on to the web GUI console of the first node with administration privileges. The new KeyControl node will automatically appear as an unauthenticated node in the KeyControl cluster, as shown below:

26.png

To authenticate this new node, click the Actions Button and then click Authenticate. This will take you to the authentication screen shown below. You are prompted to enter the Authentication Passphrase.

2728.png

On the new KeyControl’s console, you will see a succession of status messages, as shown below:

30.png

Once authentication completes, the KeyControl node is listed as Authenticated but Unreachable until cluster synchronization completes and the cluster is ready for use; this should not take more than a minute or two. Once the KeyControl node is available, the status automatically changes to Authenticated and Online, and the cluster status at the top right of the screen changes back to Healthy.

31.png

At this point, the new cluster/node is ready to use.

Now Click on the KMIP button on the toolbar to configure the KMIP.

32.png

Enable KMIP by changing the state from Disabled to Enabled, click Save, then click Apply. NOTE: Take note of the port number 5696 and have it handy; you will specify this port number in the vCenter/VCSA configuration later on.
33.png

We have now successfully set up the KMS cluster.

34.png

In the next post, we will configure vSphere to use this cluster as its KMS server. Happy Learning 🙂

 

 

VCAP6-DCV Design Exam Experience

Yesterday I sat for the VMware Certified Advanced Professional 6 — Data Center Virtualization Design exam, and I am really happy to share that I passed this difficult exam.

The exam is 175 minutes long, in which you have to complete 18 questions. Since India is a non-native English-speaking country, you get 30 minutes extra. The exam had around 5 or 6 design-type questions, compared to a single master question in the 5.5 exam.

If you are preparing for this exam, you must focus on design attributes like availability, manageability, performance, recoverability, and security, as well as functional and non-functional requirements, assumptions, constraints, and risks.

I have referred below documents for the preparation:

VCAP6-Design Blueprint and all associated documents.

Functional vs non-functional – https://communities.vmware.com/servlet/JiveServlet/downloadBody/17409-102-2-22494/Functional%20versus%20Non-functional.pdf

HIPAA security guide – http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/product-applicability-guide-hipaa-hitech-technical-guide.pdf

The VCAP5-DCD cert guide is still a good resource for understanding the design framework, and it really helps in the exam.

Few tips:

  1. Be fresh and rested; at 205 minutes, it’s quite a long time to sit in front of the screen.
  2. Stay focused and read all the questions and instructions carefully, at least twice.
  3. I would suggest starting with the design questions, which take a little bit more time.

 

Create VM Using PowerCli and an XML File

To create VMs from an XML file, we first have to create an XML file with the settings for the VMs. I created a very basic one that only specifies the name and the size of the virtual disk.

<CreateVM>
  <VM>
    <Name>MyVM1</Name>
    <HDDCapacity>1</HDDCapacity>
  </VM>
  <VM>
    <Name>MyVM2</Name>
    <HDDCapacity>1</HDDCapacity>
  </VM>
</CreateVM>

Let’s first read the content of the XML file:

PowerCLI C:\>[xml]$s = Get-Content c:\avn\myvm.xml

Then create the virtual machines using the simple one-line PowerCLI script below:

PowerCLI C:\> $s.CreateVM.VM | foreach { New-VM -VMHost esxcomp-01a.corp.local -Name $_.Name -DiskMB $_.HDDCapacity}

Here is the Result of successful creation of VMs…

powercli_xml

You can expand the XML file to include a host, disk types, CPU, memory, and other settings, as sketched below.
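
As a minimal sketch of that idea (the extra element names VMHost, NumCPU, and MemoryGB below are my own illustrative additions, not part of the original file), each child element can simply be mapped to a New-VM parameter:

# Hypothetical expanded XML, held in a here-string for illustration
[xml]$s = @"
<CreateVM>
  <VM>
    <Name>MyVM3</Name>
    <VMHost>esxcomp-01a.corp.local</VMHost>
    <HDDCapacity>2048</HDDCapacity>
    <NumCPU>2</NumCPU>
    <MemoryGB>4</MemoryGB>
  </VM>
</CreateVM>
"@

# One New-VM call per <VM> element, mapping each child element to a parameter
$s.CreateVM.VM | ForEach-Object {
    New-VM -VMHost $_.VMHost -Name $_.Name -DiskMB $_.HDDCapacity -NumCpu $_.NumCPU -MemoryGB $_.MemoryGB
}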

Custom TCP/IP Stacks

In vSphere 5.1 and earlier, there was a single TCP/IP stack used by all the different types of network traffic. This meant that management, virtual machine (VM) traffic, vMotion, NFC, and so on were all fixed to the same stack, and because of the shared stack all VMkernel interfaces had some things in common: the same default gateway, the same memory heap, and the same ARP and routing tables.

From vSphere 5.5 onward, VMware changed this, allowing multiple TCP/IP stacks, but with some limitations: only certain types of traffic could use a stack other than the default one, and the custom TCP/IP stack had to be created from the command line using an ESXCLI command.

In vSphere 6, VMware created separate vMotion and Provisioning stacks by default, and when you deploy NSX, it creates its own stack as well.

NOTE – Custom TCP/IP stacks aren’t supported for use with fault tolerance logging, management traffic, Virtual SAN traffic, vSphere Replication traffic, or vSphere Replication NFC traffic.

Create a Custom TCP/IP Stack:

To create a new TCP/IP stack we must use ESXCLI:

>esxcli network ip netstack add -N="stack name"

1

2.GIF
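
If you prefer to stay in PowerCLI instead of opening an SSH session, the same esxcli namespace is reachable through Get-EsxCli. A minimal sketch follows (the host name and stack name are placeholders, and the V2 argument key "netstack" is my assumption — check $esxcli.network.ip.netstack.add.Help() on your build):

# Get an esxcli (V2) handle for one host
$esxcli = Get-EsxCli -VMHost "esxcomp-01a.corp.local" -V2

# Create a custom TCP/IP stack (equivalent to: esxcli network ip netstack add -N="MyCustomStack")
$esxcli.network.ip.netstack.add.Invoke(@{netstack = "MyCustomStack"})

# List all stacks on the host to confirm the new one is present
$esxcli.network.ip.netstack.list.Invoke()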

Modify Custom TCP/IP Stack Settings:

Now that you have created your custom stack, it needs to be configured with settings such as which DNS servers to use and which address to use as the default gateway. In the advanced settings you can configure which congestion control algorithm to use and the maximum number of connections that can be active at any particular time.

To configure these settings, go to Manage > Networking > TCP/IP configuration and highlight the stack to be configured.

3.GIF

On the edit page, these settings can be modified:

  • Name. Change the name if required.
  • DNS Configuration. Use DHCP so the custom TCP/IP stack can pick up the settings from DHCP, or specify static DNS servers with search domains.
  • Default Gateway. One of the primary reasons for creating a TCP/IP stack separate from the default one; the gateway can be configured here.
  • Congestion Control Algorithm. The algorithm specified here affects vSphere’s response when congestion is suspected on the network stack.
  • Max Number of Connections. The default is set to 11,000.

45.GIF67.GIF

Configuring custom TCP/IP stacks also lets you have a separate gateway for each kind of traffic, if that is how the network has been designed; a sketch follows below.
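
Continuing the sketch above (the argument keys are again assumptions taken from the long option names of the matching esxcli commands, so verify them with the .Help() method first), a VMkernel adapter can be placed on the custom stack and the stack can be given its own default gateway:

# Add a new VMkernel interface on the custom stack, backed by an existing port group
$esxcli.network.ip.interface.add.Invoke(@{
    interfacename = "vmk3"
    portgroupname = "PG-Replication"
    netstack      = "MyCustomStack"
})

# Give the custom stack its own default gateway, independent of the default stack
$esxcli.network.ip.route.ipv4.add.Invoke(@{
    gateway  = "192.168.50.1"
    network  = "default"
    netstack = "MyCustomStack"
})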

VMware vSphere Replication 6.5 released with 5 minute RPO

I was going through the release notes of vSphere Replication 6.5 and was really surprised to see the RPO of this new release, so I thought it was worth sharing:

VMware vSphere Replication 6.5 provides the following new features:

  • 5-minute Recovery Point Objective (RPO) support for additional data store types – This version of vSphere Replication extends support for the 5 minute RPO setting to the following new data stores: VMFS 5, VMFS 6, NFS 4.1, NFS 3, VVOL and VSAN 6.5. This allows customers to replicate virtual machine workloads with an RPO setting as low as 5-minutes between these various data store options.

Surely a great reason to upgrade to vSphere 6.5, identify which of your VMs can work with a 5-minute RPO, and use this free replication feature of the vSphere suite.

VM Component Protection (VMCP)

vSphere 6.0 has a powerful feature as part of vSphere HA called VM Component Protection (VMCP). VMCP protects virtual machines from storage-related events, specifically Permanent Device Loss (PDL) and All Paths Down (APD) incidents.

Permanent Device Loss (PDL)
A PDL event occurs when the storage array issues a SCSI sense code indicating that the device is unavailable. A good example of this is a failed LUN, or an administrator inadvertently removing a WWN from the zone configuration. In the PDL state, the storage array can communicate with the vSphere host and will issue SCSI sense codes to indicate the status of the device. When a PDL state is detected, the host will stop sending I/O requests to the array as it considers the device permanently unavailable, so there is no reason to continue issuing I/O to the device.

All Paths Down (APD)
If the vSphere host cannot access the storage device, and there is no PDL SCSI code returned from the storage array, then the device is considered to be in an APD state. This is different than a PDL because the host doesn’t have enough information to determine if the device loss is temporary or permanent. The device may return, or it may not. During an APD condition, the host continues to retry I/O commands to the storage device until the period known as the APD Timeout is reached. Once the APD Timeout is reached, the host begins to fast-fail any non-virtual machine I/O to the storage device. This is any I/O initiated by the host such as mounting NFS volumes, but not I/O generated within the virtual machines. The I/O generated within the virtual machine will be indefinitely retried. By default, the APD Timeout value is 140 seconds and can be changed per host using the Misc.APDTimeout advanced setting.
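
For reference, this timeout can be checked or changed per host from PowerCLI as well; a quick sketch (the host name is a placeholder):

# Check the current APD timeout on a host (default is 140 seconds)
Get-AdvancedSetting -Entity (Get-VMHost "esxcomp-01a.corp.local") -Name "Misc.APDTimeout"

# Change it, for example to 180 seconds
Get-AdvancedSetting -Entity (Get-VMHost "esxcomp-01a.corp.local") -Name "Misc.APDTimeout" | Set-AdvancedSetting -Value 180 -Confirm:$false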

VM Component Protection (VMCP)
vSphere HA can now detect PDL and APD conditions and respond according to the behavior that you configure. The first step is to enable VMCP in your HA configuration. This setting simply informs the vSphere HA agent that you wish to protect your virtual machines from PDL and APD events.

Cluster Settings -> vSphere HA -> Host Hardware Monitoring – VM Component Protection -> Protect Against Storage Connectivity Loss.

1.GIF

The next step is configuring the way you want vSphere HA to respond to PDL and APD events. Each type of event can be configured independently. These settings are found in the same window where VMCP is enabled, by expanding the Failure conditions and VM response section.

2

The settings and their explanations are available on this Blog.

Storage Protocol Comparison

Many of my friends were looking for an easy-to-understand storage protocol comparison, so here is what I created some time back for my own reference:

The comparison below covers iSCSI, NFS, Fibre Channel (FC), and FCoE.

Description
  • iSCSI: iSCSI presents block devices to a VMware® ESXi™ host. Rather than accessing blocks from a local disk, I/O operations are carried out over a network using a block access protocol. In the case of iSCSI, remote blocks are accessed by encapsulating SCSI commands and data into TCP/IP packets.
  • NFS: NFS presents file devices over a network to an ESXi host for mounting. The NFS server/array makes its local file systems available to ESXi hosts. ESXi hosts access the metadata and files on the NFS array/server using an RPC-based protocol.
  • FC: Fibre Channel (FC) presents block devices similar to iSCSI. Again, I/O operations are carried out over a network, using a block access protocol. In FC, remote blocks are accessed by encapsulating SCSI commands and data into FC frames. FC is commonly deployed in the majority of mission-critical environments.
  • FCoE: Fibre Channel over Ethernet (FCoE) also presents block devices, with I/O operations carried out over a network using a block access protocol. In this protocol, SCSI commands and data are encapsulated into Ethernet frames. FCoE has many of the same characteristics as FC, except that the transport is Ethernet.

Implementation Options
  • iSCSI: Network adapter with iSCSI capabilities, using a software iSCSI initiator and accessed using a VMkernel (vmknic) port; or a dependent hardware iSCSI initiator; or an independent hardware iSCSI initiator.
  • NFS: Standard network adapter, accessed using a VMkernel port (vmknic).
  • FC: Requires a dedicated host bus adapter (HBA) (typically two, for redundancy and multipathing).
  • FCoE: Hardware converged network adapter (CNA), or a network adapter with FCoE capabilities using a software FCoE initiator.

Performance Considerations
  • iSCSI: iSCSI can run over a 1Gb or a 10Gb TCP/IP network. Multiple connections can be multiplexed into a single session, established between the initiator and target. VMware supports jumbo frames for iSCSI traffic, which can improve performance. Jumbo frames send payloads larger than 1,500 bytes.
  • NFS: NFS can run over 1Gb or 10Gb TCP/IP networks. NFS also supports UDP, but the VMware implementation requires TCP. VMware supports jumbo frames for NFS traffic, which can improve performance in certain situations.
  • FC: FC can run on 1Gb/2Gb/4Gb/8Gb and 16Gb HBAs. This protocol typically affects a host’s CPU the least, because HBAs (required for FC) handle most of the processing (encapsulation of SCSI data into FC frames).
  • FCoE: This protocol requires 10Gb Ethernet. With FCoE, there is no IP encapsulation of the data as there is with NFS and iSCSI, which reduces some of the overhead/latency. FCoE is SCSI over Ethernet, not IP. This protocol also requires jumbo frames, because FC payloads are 2.2K in size and cannot be fragmented.

Error Checking
  • iSCSI: iSCSI uses TCP, which resends dropped packets.
  • NFS: NFS uses TCP, which resends dropped packets.
  • FC: FC is implemented as a lossless network. This is achieved by throttling throughput at times of congestion, using B2B and E2E credits.
  • FCoE: FCoE requires a lossless network. This is achieved by the implementation of a pause frame mechanism at times of congestion.

Security
  • iSCSI: iSCSI implements the Challenge Handshake Authentication Protocol (CHAP) to ensure that initiators and targets trust each other. VLANs or private networks are highly recommended to isolate the iSCSI traffic from other traffic types.
  • NFS: VLANs or private networks are highly recommended to isolate the NFS traffic from other traffic types.
  • FC: Some FC switches support the concept of a VSAN, to isolate parts of the storage infrastructure. VSANs are conceptually similar to VLANs. Zoning between hosts and FC targets also offers a degree of isolation.
  • FCoE: Some FCoE switches support the concept of a VSAN, to isolate parts of the storage infrastructure. Zoning between hosts and FCoE targets also offers a degree of isolation.

ESXi Boot from SAN
  • iSCSI: Yes
  • NFS: No
  • FC: Yes
  • FCoE: Software FCoE – No; hardware FCoE (CNA) – Yes

Maximum Device Size
  • iSCSI: 64TB
  • NFS: Refer to the NAS array or NAS server vendor for the maximum supported datastore size. The theoretical size is much larger than 64TB but requires the NAS vendor to support it.
  • FC: 64TB
  • FCoE: 64TB

Maximum Number of Devices
  • iSCSI: 256
  • NFS: Default: 8; maximum: 256
  • FC: 256
  • FCoE: 256

Storage vMotion Support
  • iSCSI: Yes
  • NFS: Yes
  • FC: Yes
  • FCoE: Yes

Storage DRS Support
  • iSCSI: Yes
  • NFS: Yes
  • FC: Yes
  • FCoE: Yes

Storage I/O Control Support
  • iSCSI: Yes
  • NFS: Yes
  • FC: Yes
  • FCoE: Yes

Virtualized MSCS Support
  • iSCSI: No. VMware does not support MSCS nodes built on virtual machines residing on iSCSI storage.
  • NFS: No. VMware does not support MSCS nodes built on virtual machines residing on NFS storage.
  • FC: Yes. VMware supports MSCS nodes built on virtual machines residing on FC storage.
  • FCoE: No. VMware does not support MSCS nodes built on virtual machines residing on FCoE storage.

This has been taken from VMware’s Storage Protocol Comparison white paper; please check this link for more details.
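
If you want a quick view of which of these protocols are actually in use in your own environment, a small PowerCLI sketch like the one below can help (it only distinguishes datastore types and HBA types; it does not tell FCoE CNAs apart from native FC HBAs):

# Datastores and their type (VMFS for the block protocols, NFS for file)
Get-Datastore | Select-Object Name, Type, CapacityGB | Sort-Object Name

# Storage adapters per host: iSCSI and Fibre Channel HBAs
Get-VMHost | Get-VMHostHba -Type iScsi, FibreChannel | Select-Object VMHost, Device, Type, Model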