Security VDC Architecture with VMware Cloud Director

Cloud Director VDCs come with all the features you’d expect from a public cloud. A virtual data center (VDC) is a logical representation of a physical data center, created using virtualization technologies, that allows IT administrators to create, provision, and manage virtualized resources such as servers, storage, and networking in a flexible and efficient manner. The recently released VMware Cloud Director 10.4.1 brings quite a lot of new features, and in this article I want to double-click on external networking.

External Networks

An external network is a network that is external to the VCD infrastructure, such as a physical network or a virtual network. External networks are used to connect virtual machines in VCD with the external world, allowing them to communicate with the Internet or with other networks that are not part of the VCD infrastructure.

New Features in Cloud Director 10.4.1 External Networks

With the release of Cloud Director 10.4.1, external networks that are NSX-T segment backed (VLAN or overlay) can now be connected directly to an Org VDC Edge Gateway and no longer require routing through the Tier-0 or VRF gateway that the Org VDC Gateway is connected to. This connection is made via a service interface on the service node of the Tier-1 gateway that backs the Org VDC Edge Gateway. The Org VDC Edge Gateway still needs a parent Tier-0/VRF, although it can be disconnected from it. Here are some of the use cases of this external network capability that we are going to discuss:

  • Transit connectivity across multiple Org VDC Edge Gateways to route between different Org VDCs
  • Routed connectivity via dedicated VLAN to tenant’s co-location physical servers
  • Connectivity towards Partner Service Networks
  • MPLS connectivity for direct connect, while internet access remains via a shared provider gateway

Security VDC Architecture using External Networks (transit connectivity across two Org VDC Edge Gateways)

A Security VDC is a common strategy for connecting multiple VDCs: the Security VDC becomes the single egress and ingress point and can host additional firewalls and other security services. The section below shows how that can be achieved using the new external network feature:

  • This design uses an overlay (Geneve) backed external network
  • The overlay network must be prepared by the provider in NSX as well as in Cloud Director
  • The NSX Tier-1 GW does not provide any dynamic routing capabilities, so routing to such a network can be configured only via static routes
  • The Tier-1 GW always has a default route (0.0.0.0/0) pointing towards its parent Tier-0/VRF GW
  • To point the default route at the segment-backed external network, you need to use two more-specific routes instead. For example:
    • 0.0.0.0/1 next hop <IP> scope <network>
    • 128.0.0.0/1 next hop <IP> scope <network>
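
For reference, these routes can also be added programmatically. The sketch below is a minimal illustration using the CloudAPI static-route endpoint for edge gateways introduced with VCD 10.4; the URL, gateway URN, payload field names, and IP values are placeholders/assumptions to adapt to your environment:

# Add the two more-specific routes on the Org VDC Edge GW (sketch; payload shape assumed)
curl -sk -X POST "https://<vcd-url>/cloudapi/1.0.0/edgeGateways/urn:vcloud:gateway:<id>/routing/staticRoutes" \
  -H "Authorization: Bearer <token>" -H "Content-Type: application/json" \
  -d '{"name":"transit-lower-half","networkCidr":"0.0.0.0/1","nextHops":[{"ipAddress":"<next-hop-IP>","adminDistance":1}]}'
# Repeat the call with "networkCidr":"128.0.0.0/1" to cover the upper half of the address space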

Security VDC Architecture using External Networks – Multi-VDC

  1. Log in to the VMware Cloud Director Service Provider Admin Portal.
  2. From the top navigation bar, select Resources and click Cloud Resources.
  3. In the left pane, click External Networks and click New.

  4. On the Backing Type page, select NSX-T Segments and a registered NSX Manager instance to back the network, and click Next.
  5. Enter a name and a description for the new external network.
  6. Select an NSX segment from the list to import and click Next. An NSX segment can be backed either by a VLAN transport zone or by an overlay transport zone.
  7. Configure at least one subnet and click Next.
    a. To add a subnet, click New.
    b. Enter a gateway CIDR.
    c. To enable the subnet, select the State checkbox.
    d. Configure a static IP pool by adding at least one IP range or IP address.
    e. Click Save.
    f. (Optional) To add another subnet, repeat steps A to E.
  8. Review the network settings and click Finish.
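
Once the wizard completes, you can quickly confirm that the new network exists via the CloudAPI. A minimal sketch (the URL, token, and API version header are placeholders to match your environment):

# List external networks and check that the new segment-backed network is present
curl -sk "https://<vcd-url>/cloudapi/1.0.0/externalNetworks" \
  -H "Authorization: Bearer <token>" \
  -H "Accept: application/json;version=37.0"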

Now the provider needs to go to the tenant org/VDC, add the external network configured above to the tenant's Tier-1 edge gateway, and offer net-new networking configuration and options.

Other Patterns

Routed connectivity via dedicated VLAN to tenant’s co-location physical servers or MPLS

  • This design uses a VLAN-backed external network
  • The VLAN-backed network must be prepared by the provider in NSX as well as in Cloud Director
  • The NSX Tier-1 GW does not provide any dynamic routing capabilities, so routing to such a network can be configured only via static routes
  • A VLAN segment can be connected to only one Org VDC Edge GW
  • The Tier-1 GW always has a default route (0.0.0.0/0) pointing towards its parent Tier-0/VRF GW
  • Set a static route to the segment-backed external network. For example:
    • 172.16.50.0/24 next hop <10.10.10.1> scope <external network>

Connectivity towards Partner Service Networks

  • This design uses a VLAN-backed external network
  • The VLAN-backed network must be prepared by the provider in NSX as well as in Cloud Director
  • The NSX Tier-1 GW does not provide any dynamic routing capabilities, so routing to such a network can be configured only via static routes
  • A VLAN segment can be connected to only one Org VDC Edge GW
  • The Tier-1 GW always has a default route (0.0.0.0/0) pointing towards its parent Tier-0/VRF GW
  • Set a static route to the partner service network. For example:
    • <Service Network> next hop <Service Network Router IP> scope <External Network>

I hope this article helps providers offer net-new network capabilities to their tenants. Please feel free to share feedback.

Getting Started with VMware Cloud Director Container Service Extension 4.0

VMware Cloud Director Container Service Extension brings Kubernetes as a service to VMware Cloud Director, offering multi-tenant, VMware supported, production ready, and compatible Kubernetes services with Tanzu Kubernetes Grid. As a service provider administrator, you can add the service to your existing VMware Cloud Director tenants. By using VMware Cloud Director Container Service Extension, customers can also use Tanzu products and services such as Tanzu® Mission Control to manage their clusters.

Prerequisites for Container Service Extension 4.0

  • Provider Specific Organization – before you can configure the VMware Cloud Director Container Service Extension server, you must create an organization to host the VMware Cloud Director Container Service Extension server
  • Organization VDC within the Organization – the Container Service Extension appliance will be deployed in this organization virtual data center
  • Network connectivity – network connectivity is required between the machine where VMware Cloud Director Container Service Extension is installed and the VMware Cloud Director server; VMware Cloud Director Container Service Extension communicates with VMware Cloud Director using the VMware Cloud Director public API endpoint
  • CSE 4.0 CPI automatically creates load balancers, so you must ensure that you have configured NSX Advanced Load Balancer, NSX Cloud, and an NSX Advanced Load Balancer Service Engine Group for tenants who need to create Tanzu Kubernetes clusters

Provider Configuration

With the release of VMware Cloud Director Container Service Extension 4.0, service providers can use the CSE Management tab in the Kubernetes Container Clusters UI plug-in, which demonstrates the step-by-step process to configure the VMware Cloud Director Container Service Extension server.

Install Kubernetes Container Clusters UI plug-in for VMware Cloud Director

You can download the Kubernetes Container Clusters UI plug-in from the VMware Cloud Director download page and upload the plug-in to VMware Cloud Director.

NOTE: If you have previously used the Kubernetes Container Clusters plug-in with VMware Cloud Director, it is necessary to deactivate it before you can activate a newer version, as only one version of the plug-in can operate at one time in VMware Cloud Director. Once you activate a new plug-in, it is necessary to refresh your Internet browser to begin using it.

Once the partner has installed the plug-in, the Getting Started section within the CSE Management page helps providers learn about and set up VMware Cloud Director Container Service Extension in VMware Cloud Director through the Kubernetes Container Clusters UI plug-in 4.0. At a very high level, this is a six-step process:

Let's follow these steps and deploy.

Step:1 – This section links to the locations where providers can download the following two types of OVA files that are necessary for VMware Cloud Director Container Service Extension configuration:

NOTE- Do not download FIPS enabled templates

Step:2 – Create a catalog in VMware Cloud Director and upload the VMware Cloud Director Container Service Extension OVA files that you downloaded in Step:1 into this catalog.

Step:3 – This section initiates the VMware Cloud Director Container Service Extension server configuration process. In this process, you can enter details such as software versions, proxy information, and syslog location. This workflow automatically creates a Kubernetes Clusters rights bundle, a CSE Admin role, a Kubernetes Cluster Author role, and VM sizing policies. In this process, the Kubernetes Clusters rights bundle and Kubernetes Cluster Author role are automatically published to all tenants, and the following Kubernetes resource versions are deployed:

Kubernetes Resource                 Supported Version
Cloud Provider Interface (CPI)      1.2.0
Container Storage Interface (CSI)   1.3.0
CAPVCD                              1.0.0

Step:4 – This section links to the Organization VDCs section in VMware Cloud Director, where you can assign VM sizing policies to customer organization VDCs. To avoid resource limit errors in clusters, it is necessary to add Tanzu Kubernetes Grid VM sizing policies to organization virtual data centers. The Tanzu Kubernetes Grid VM sizing policies are automatically created in the previous step. The policies created are as below:

Sizing Policy     Description                Values
TKG small         Small VM sizing policy     2 CPU, 4 GB memory
TKG medium        Medium VM sizing policy    2 CPU, 8 GB memory
TKG large         Large VM sizing policy     4 CPU, 16 GB memory
TKG extra-large   X-large VM sizing policy   8 CPU, 32 GB memory

NOTE: Providers can manually create more policies based on requirements and publish them to tenants.

In the VMware Cloud Director UI, select an organization VDC and, from the left panel under Policies, select VM Sizing. Click Add, then from the data grid select the Tanzu Kubernetes Grid sizing policy you want to add to the organization, and click OK.

Step:5 – This section links to the Users section in VMware Cloud Director, where you can create a user with the CSE Admin role. This role grants the user administration privileges for VMware Cloud Director Container Service Extension administrative purposes. You use this user account in the OVA deployment parameters when you start the VMware Cloud Director Container Service Extension server.

Step:6 – This section links to the vApps section in VMware Cloud Director where you can create a vApp from the uploaded VMware Cloud Director Container Service Extension server OVA file to start the VMware Cloud Director Container Service Extension server.

  • Create a vApp from VMware Cloud Director Container Service Extension server OVA file.
  • Configure the VMware Cloud Director Container Service Extension server vApp deployment lease
  • Power on the VMware Cloud Director Container Service Extension server.

Container Service Extension OVA deployment

Enter a vApp name, optionally a description, a runtime lease and a storage lease (set these to no lease so that the vApp does not suspend automatically), and click Next.

Select a virtual data center and review the default configuration for resources, compute policies, hardware, networking, and edit details where necessary.

  • In the Custom Properties window, configure the following settings:
    • VCD host: VMware Cloud Director URL
    • CSE service account’s username: the user with the CSE Admin role in the organization
    • CSE service account’s API Token: to generate an API token, log in to a provider session with the CSE user which you created in Step:5, go to “User Preferences”, and click “NEW” in the “Access Tokens” section. When you generate an API access token, you must copy the token, because it appears only once; after you click OK, you cannot retrieve this token again, you can only revoke it. (See the sketch after this list.)
    • CSE service account’s org: the organization that the user with the CSE Admin role belongs to, and that the VMware Cloud Director Container Service Extension server deploys to
    • CSE service vApp’s org: name of the provider org where the CSE vApp will be deployed
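
For context, the CSE server later exchanges this API token for a short-lived access token against the VCD OAuth endpoint at login time. A minimal sketch of that exchange (URL and token values are placeholders):

# Exchange an API (refresh) token for an access token in the provider context
curl -sk -X POST "https://<vcd-url>/oauth/provider/token" \
  -H "Accept: application/json" \
  -d "grant_type=refresh_token&refresh_token=<api-token>"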

In the Virtual Applications tab, at the bottom left of the vApp, click Actions > Power > Start. This completes the vApp creation from the VMware Cloud Director Container Service Extension server OVA file. This task is the final step for service providers to perform before the VMware Cloud Director Container Service Extension server can operate and start provisioning Tanzu Kubernetes clusters.

CSE 4.0 is packed with capabilities that address and attract developer personas with an improved feature set and simplified cluster lifecycle management. Users can now build, upgrade, resize, and delete K8s clusters directly from the UI, making it simpler and faster to accomplish tasks than before. This completes the provider section of Container Service Extension; in the next blog post I will write about the tenant workflow.

Multi-Tenant Tanzu Data Services with VMware Cloud Director

VMware Cloud Director extension for VMware Data Solutions is a plug-in for VMware Cloud Director (VCD) that enables cloud providers to expand their multi-tenant cloud infrastructure platform to deliver a portfolio of on-demand caching, messaging, and database software services at massive scale. This brings a new opportunity for our cloud providers to offer additional cloud native developer services in addition to the VCD powered Infrastructure-as-a-Service (IaaS).

VMware Cloud Director extension for Data Solutions offers a simple tenant-facing self-service UI for the lifecycle management of the Tanzu data services below, with a single view across multiple instances and a URL to individual instances for service-specific management.

Tenant Self-Service Access to Data Solutions

Tenant users can access VMware Cloud Director extension for Data Solutions from the VMware Cloud Director tenant portal.

Before a tenant user can deploy any of the above solutions, they must prepare their Tanzu K8s clusters deployed by CSE. When you click Install Operator for a Kubernetes cluster, the Data Solutions operator is automatically installed on that cluster; this operator handles lifecycle management of the data services. To install the operator, simply log in to VMware Cloud Director extension for Data Solutions from VMware Cloud Director and then:

  1. Click Settings > Kubernetes Clusters.
  2. Select the Kubernetes cluster on which you want to deploy data services.
  3. Click Install Operator.

It takes a few minutes for the status of the cluster to change to Active.
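
Optionally, you can verify the operator from the cluster side; a rough sketch, assuming kubectl access to the cluster (the operator's namespace and pod names vary, so the grep pattern is an assumption):

# Look for the Data Solutions operator pods across all namespaces (pattern is an assumption)
kubectl get pods -A | grep -i 'data-solution'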

Deploy a Tanzu Data Services instance

Go to Solutions, choose your required solution, and click “Launch”.

This will take you to the “Instances” page; there, enter the necessary details.

  • Enter the instance name.
  • Solution should have RabbitMQ selected.
  • Select the Kubernetes cluster (you can only select a cluster which has the Data Solutions Operator successfully installed).
  • Select a solution template (T-shirt sizes).

To customize the instance, for example to configure the RabbitMQ Management Console or to expose a load balancer for AMQP, click Show Advanced Settings and select the appropriate option.

Monitor Instance Health using Grafana

Tanzu Kubernetes Grid provides cluster monitoring services by implementing the open source Prometheus and Grafana projects. Tenants can use the Grafana portal to get insights about the state of the RabbitMQ nodes and runtime. For this to work, Grafana must be installed on the CSE 4 Tanzu cluster.

NOTE: Follow this link for Prometheus and Grafana installation on CSE Tanzu K8s clusters.

Connecting to RabbitMQ

Since I exposed RabbitMQ during the deployment via “Expose Load Balancer for AMQP”, if you take a look at the VCD load balancer configuration, CSE automatically exposed RabbitMQ as a load balancer VIP and a NAT rule was created, so that you can access it from outside.
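
A quick way to check the exposed endpoints from outside is shown below; the VIP is a placeholder, while 5672 and 15672 are the standard RabbitMQ AMQP and management ports:

# Test AMQP reachability on the load balancer VIP
nc -vz <lb-vip> 5672
# If the management console was exposed, it listens on the standard management port
curl -k https://<lb-vip>:15672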

Provider Configuration

Before you start using VMware Cloud Director extension for Data Solutions, you must meet certain prerequisites:

  1. VMware Cloud Director version 10.3.1 or later.
  2. Container Service Extension version 4.0 or later added to your VMware Cloud Director.
  3. A client machine with macOS or Linux, which has network connectivity to the VMware Cloud Director REST endpoint.
  4. Verify that you have obtained a VMware Data Solutions account.

Detailed instructions for installing VMware Cloud Director extension for VMware Data Solutions are detailed here.

VMware Cloud Director extension for VMware Data Solutions comes at zero additional cost to our cloud providers. Please note that while the extension itself does not come with a cost, cloud providers need to report their service consumption of the Data Services, which do carry a cost.

VMware Cloud Director Charge Back Explained

VMware Chargeback not only enables metering and chargeback capabilities, but also provides visibility into infrastructure usage through performance and capacity dashboards for the cloud providers as well as tenants.

To help cloud providers and tenants realise more value for every dollar they spend on infrastructure (ROI), and in turn provide similar value to their tenants, our focus is not only to expand the coverage of services that can be priced in VMware Chargeback, but also to provide visibility into the cost of infrastructure to providers, and a billing summary to organizations, clearly highlighting the cost incurred by various business units. But before we dive in further to see what's new with this release, please note:

  • vRealize Operations Tenant App is now rebranded to VMware Chargeback.
  • VMware Chargeback is also now available as a SaaS offering. The Software-as-a-Service (SaaS) offering will be available as early access, with limited availability, with the purchase or trial of the VMware Cloud Director™ service. See the Announcing VMware Chargeback for Managed Service Providers blog.

Creation of pricing policy based on chargeback strategy

Provider administrators can create one or more pricing policies based on how they want to charge their tenants. Based on the vCloud Director allocation models, each pricing policy is of type Allocation Pool, Reservation Pool, or Pay-As-You-Go.

NOTE – The pricing policies apply to VMs at a minimum granularity of five minutes. VMs that are created and deleted within a span shorter than five minutes will still be charged.

CPU Rate

Providers can charge the CPU rate based on GHz or vCPU count.

  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, Monthly
  • Charge Based on Power State – indicates the pricing model based on which the charges are applied; values are Always, Only when powered on, Powered on at least once
  • Default Base Rate – any base rate that the provider wants to charge
  • Add Slab – providers can optionally charge different rates depending on the number of vCPUs used
  • Fixed Cost – fixed costs do not depend on the units of charging

Memory Rate

  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, Monthly
  • Charge Based on – indicates the pricing model based on which the charge is applied; values are Usage, Allocation, and Maximum of usage and allocation
  • Charge Based on Power State – indicates the pricing model based on which the charges are applied; values are Always, Only when powered on, Powered on at least once
  • Default Base Rate – any base rate that the provider wants to charge
  • Add Slab – providers can optionally charge different rates depending on the memory allocated
  • Fixed Cost – fixed costs do not depend on the units of charging

Storage Rate

You can charge for storage either based on storage policies or independent of it.

  • Setting rates independent of storage policies will be deprecated in a future release, so it is advisable to use the Storage Policy option instead.
  • Select the Storage Policy Name from the drop-down menu.
  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, Monthly
  • Charge Based on – indicates the pricing model based on which the charge is applied; you can charge for used storage or configured storage of the VMs
  • Charge Based on Power State – decides if the charge should be applied based on the power state of the VM; values are Always, Only when powered on, Powered on at least once
  • Add Slab – you can optionally charge different rates depending on the storage allocated

Network Rate

Enter the External Network Transmit and External Network Receive rates.

Note: If your network is backed by NSX-T, you will be charged only for the network data transmit and network data receive.

  • Network Transmit Rate – select the Charge Period and enter the Default Base Rate; using slabs, you can optionally charge different rates depending on the network data consumed
  • Network Receive Rate – select the Charge Period and enter the Default Base Rate; using slabs, you can optionally charge different rates depending on the network data consumed. Enter valid numbers for the Base Rate Slab and click Add Slab.

Advanced Network Rate

Under Edge Gateway Size, enter the base rates for the corresponding edge gateway sizes.

  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, Monthly
  • Enter the Base Rate

Guest OS Rate

Use the Guest OS Rate to charge differently for different operating systems.

  • Enter the Guest OS Name
  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, Monthly
  • Charge Based on Power State – decides if the charge should be applied based on the power state of the VM; values are Always, Only when powered on, Powered on at least once
  • Enter the Base Rate

Cloud Director Availability

The Cloud Director Availability section is used to set pricing for replications created from Cloud Director Availability.

  • Replication SLA Profile – enter a replication policy name
  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, Monthly
  • Enter the Base Rate

You can also charge for the storage consumed by replication objects in the Storage Usage Charge section. This is used to set additional pricing for storage used by Cloud Director Availability replications in Cloud Director. Please note that the storage usage defined in this tab is added on top of the Storage Policy Base Rate.

vCenter Tag Rate

This section is used for any additional charges to be applied to VMs based on their tags discovered from vCenter (typical examples are Antivirus=true, SpecialSupport=true, etc.).

  • Enter the Tag Category and Tag Value
  • Charge based on a Fixed Rate, or
  • Charge based on an Alternate Pricing Policy – select the appropriate pricing policy
  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, Monthly
  • Charge Based on Power State – decides if the charge should be applied based on the power state of the VM; values are Always, Only when powered on, Powered on at least once
  • Enter the Base Rate

VCD Metadata Rate

Use the VCD Metadata Rate to charge differently for different metadata set on vApps.

NOTE – Metadata based prices are available in bills only if the Enable Metadata option is enabled in the vRealize Operations Management Pack for VMware Cloud Director.

  • Enter the Tag Category and Tag Value
  • Charge based on a Fixed Rate, or
  • Charge based on an Alternate Pricing Policy – select the appropriate pricing policy
  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, Monthly
  • Charge Based on Power State – decides if the charge should be applied based on the power state of the VM; values are Always, Only when powered on, Powered on at least once
  • Enter the Base Rate

One Time Fixed Cost

One Time Fixed Cost is used to charge for one-time incidental charges on virtual machines, such as creation/setup charges, or charges for one-off incidents like installation of a patch. These costs do not repeat on a recurring basis.

For values, follow the VCD Metadata and vCenter Tag sections.

Rate Factors

Rate factors are used to either bump up or discount the prices, either against individual resources consumed by the virtual machines or against the whole charge for the virtual machine. Some examples are:

  • Increase the CPU rate by 20% (factor 1.2) for all VMs tagged with CPUOptimized=true
  • Discount the overall charge on a VM by 50% (factor 0.5) for all VMs tagged with PromotionalVM=true
  • VCD Metadata
    • Enter the Tag Key and Tag Value
      • Change the price of Total, vCPU, Memory, and Storage
      • By applying a factor of – increase or decrease the price by entering a valid number
  • vCenter Tag
    • Enter the Tag Key and Tag Value
      • Change the price of Total, vCPU, Memory, and Storage
      • By applying a factor of – increase or decrease the price by entering a valid number
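
To illustrate how a factor applies, here is a tiny worked example; the rates and usage numbers are made up purely for illustration:

# CPU charge with base rate $0.05 per GHz-hour, 4 GHz used for 24 hours:
echo "scale=2; 0.05 * 4 * 24" | bc         # 4.80 without a rate factor
# Same charge for a VM tagged CPUOptimized=true with factor 1.2:
echo "scale=2; 0.05 * 4 * 24 * 1.2" | bc   # 5.76 after the 20% bump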

Tanzu Kubernetes Clusters

This section is used to charge for Tanzu K8s clusters and objects.

  • Cluster Fixed Cost
    • Charge Period – indicates the frequency of charging; values are Hourly, Daily, Monthly
    • Fixed Cost – fixed costs do not depend on the units of charging
  • Cluster CPU Rate
    • Charge Period – indicates the frequency of charging; values are Hourly, Daily, Monthly
    • Charge Based on – decides if the charge should be applied based on Usage or Allocation
    • Default Base Rate (per GHz)
  • Cluster Memory Rate
    • Charge Period – indicates the frequency of charging; values are Hourly, Daily, Monthly
    • Charge Based on – decides if the charge should be applied based on Usage or Allocation
    • Default Base Rate (per GB)

Additional Fixed Cost

You can use the Additional Fixed Cost section to charge at the Org VDC level. You can use this for charges such as overall tax, overall discounts, and so on. The charges can be applied to selective Org VDCs based on Org VDC metadata.

  • Fixed Cost
    • Charge Period – indicates the frequency of charging; values are Hourly, Daily, Monthly
    • Fixed Cost
  • VCD Metadata – enter the Tag Key and Tag Value
  • VCD Metadata One Time – enter the Tag Key and Tag Value

Apply Policy

Cloud Director Chargeback provides flexibility for service providers to map the created pricing policies to specific organization VDCs. By doing this, the service provider can holistically define how each of their customers is charged based on resource types.

Bills

Every tenant/customer of a service provider can see and review their bills using the Cloud Director Chargeback app. A service provider administrator can generate bills for a tenant by selecting a specific resource and a pricing policy that must be applied for a defined period, and can also log in to review the bill details.

This completes the feature demonstration of Cloud Director Chargeback. Go ahead, deploy it, and add native chargeback power to your cloud.

AI/ML with VMware Cloud Director

AI/ML—short for artificial intelligence (AI) and machine learning (ML)—represents an important evolution in computer science and data processing that is quickly transforming a vast array of industries.

Why is AI/ML important?

It's no secret that data is an increasingly important business asset, with the amount of data generated and stored globally growing at an exponential rate. Of course, collecting data is pointless if you don't do anything with it, but these enormous floods of data are simply unmanageable without automated systems to help.

Artificial intelligence, machine learning and deep learning give organizations a way to extract value out of the troves of data they collect, delivering business insights, automating tasks and advancing system capabilities. AI/ML has the potential to transform all aspects of a business by helping them achieve measurable outcomes including:

  • Increasing customer satisfaction
  • Offering differentiated digital services
  • Optimizing existing business services
  • Automating business operations
  • Increasing revenue
  • Reducing costs

As modern applications become more prolific, Cloud Providers need to address the increasing customer demand for accelerated computing that typically requires large volumes of multiple, simultaneous computation that can be met with GPU capability.

Cloud providers can now leverage vSphere support for NVIDIA GPUs and NVIDIA AI Enterprise (a cloud-native software suite for the development and deployment of AI, optimized and certified for VMware vSphere). This enables vSphere capabilities like vMotion from within Cloud Director and delivers multi-tenant GPU services, which are key to maximizing GPU resource utilization. With Cloud Director support for the NVIDIA AI Enterprise software suite, customers now have access to best-in-class, GPU-optimized AI frameworks and tools to deliver compute-intensive workloads, including artificial intelligence (AI) and machine learning (ML) applications, within their datacenters.

This solution with NVIDIA takes advantage of NVIDIA MIG (Multi-instance GPU) which supports spatial segmentation between workloads at the physical level inside a single device and is a big deal for multi-tenant environments driving better optimization of hardware and increased margins. Cloud Director is reliant on host pre-configuration for GPU services included in NVIDIA AI Enterprise which contains vGPU technology to enable deployment/configuration on hosts and GPU profiles.

Customers can self serve, manage and monitor their GPU accelerated hosts and virtual machines within Cloud Director. Cloud Providers are able to monitor (through vCloud API and UI dashboard) NVIDIA vGPU allocation, usage per VDC and per VM to optimize utilization and meter/bill (through vCloud API) NVIDIA vGPU usage averaged over a unit of time per tenant for tenant billing.

Provider Workflow

  • Add GPU devices to ESXi hosts in vCenter and install the required drivers.
  • Verify that vGPU profiles are visible by going to the vCD provider portal → Resources → Infrastructure Resources → vGPU Profiles.
  • Edit vGPU profiles to provide necessary tenant-facing instructions and a tenant-facing name for each vGPU profile. (Optional)
  • Create a PVDC backed by one or more clusters having GPU hosts in vCenter.
  • In the provider portal → Cloud Resources → vGPU Policies → create a new vGPU policy by following the wizard's steps.

Tenant Workflow

When you create a vGPU policy, it is not visible to tenants. You can publish a vGPU policy to an organization VDC to make it available to tenants.

Publishing a vGPU policy to an organization VDC makes the policy visible to tenants. The tenant can select the policy when they create a new standalone VM or a VM from a template, edit a VM, add a VM to a vApp, and create a vApp from a vApp template. You cannot delete a vGPU policy that is available to tenants.

  • Publish the vGPU policy to one or more tenant VDCs similar to the way we publish sizing and placement policies.
  • Create a new VM or instantiate a VM from a template. In vGPU enabled VDCs, tenants can now select a vGPU policy.

Cloud Director not only allows this for VMs; providers can also leverage Cloud Director's Container Service Extension to offer GPU enabled Tanzu Kubernetes clusters.

Step-by-Step Configuration

The video below covers the step-by-step process of configuring the provider and tenant sides of the configuration, as well as deploying TensorFlow GPU into a VM.

Cloud Director OIDC Configuration using OKTA IDP

OpenID Connect (OIDC) is an industry-standard authentication layer built on top of the OAuth 2.0 authorization protocol. The OAuth 2.0 protocol provides security through scoped access tokens, and OIDC provides user authentication and single sign-on (SSO) functionality. For more details, refer here (https://datatracker.ietf.org/doc/html/rfc6749). There are two main types of authentication that you can perform with Okta:

  • The OAuth 2.0 protocol controls authorization to access a protected resource, like your web app, native app, or API service.
  • The OpenID Connect (OIDC) protocol is built on the OAuth 2.0 protocol and helps authenticate users and convey information about them. It’s also more opinionated than plain OAuth 2.0, for example in its scope definitions.

If you want to import users and groups from an OpenID Connect (OIDC) identity provider to your Cloud Director system (provider) or tenant organization, you must configure the provider/tenant organization with this OIDC identity provider. Imported users can log in to the system/tenant organization with the credentials established in the OIDC identity provider.

We can use VMware Workspace ONE Access (VIDM) or any public identity provider, but make sure the OAuth authentication endpoint is reachable from the VMware Cloud Director cells. In this blog post we will use OKTA OIDC and configure VMware Cloud Director to use this OIDC provider for authentication.

Step:1 – Configure OKTA OIDC

For this blog post, I created a developer account on OKTA at this URL: https://developer.okta.com/signup. Once the account is ready, follow the steps below to add Cloud Director as an application in the OKTA console:

  • In the Admin Console, go to Applications > Applications.
  • Click Create App Integration.
  • To create an OIDC app integration, select OIDC – OpenID Connect as the Sign-in method.
  • Choose what type of application you plan to integrate with Okta, in Cloud Director case Select Web Application.
  • App integration name: Specify a name for Cloud Director
  • Logo (Optional): Add a logo to accompany your app integration in the Okta org
  • Grant type: Select from the different grant type options
  • Sign-in redirect URIs: the Sign-in redirect URI is where Okta sends the authentication response and ID token for the sign-in request. In our case, for the provider use https://<vcd url>/login/oauth?service=provider, and in case you are configuring it for a tenant, use https://<vcd url>/login/oauth?service=tenant:<org name>
  • Sign-out redirect URIs: after your application contacts Okta to close the user session, Okta redirects the user to this URI.
  • Assignments – Controlled access: the default access option assigns and grants login access to this new app integration for everyone in your Okta org, or you can choose to limit access to selected groups

Click Save. This action creates the app integration and opens the settings page to configure additional options.

The Client Credentials section has the Client ID and Client secret values for the Cloud Director integration; copy both values, as we will enter them in Cloud Director.

The General Settings section has the Okta Domain for the Cloud Director integration; copy this value as well, as we will enter it in Cloud Director.

Step:2 – Cloud Director OIDC Configuration

Now I am going to configure OIDC authentication for the provider side of Cloud Director; with very minor changes (the tenant URL), it can be configured for tenants too.

Let's go to Cloud Director; from the top navigation bar, select Administration, and on the left panel, under Identity Providers, click OIDC and click CONFIGURE.

General: Make sure that the OpenID Connect Status is active, and enter the client ID and client secret information from the OKTA app registration which we captured above.

To use the information from a well-known endpoint to automatically fill in the configuration information, turn on the Configuration Discovery toggle and enter a URL. For OKTA the URL would look like this – https://<domain.okta.com>/.well-known/openid-configuration – then click NEXT.
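
You can inspect what Configuration Discovery will consume by fetching the well-known document yourself. A small sketch, assuming jq is installed:

# Fetch OKTA's discovery document and show the endpoints VCD will use
curl -s https://<domain.okta.com>/.well-known/openid-configuration | \
  jq '{authorization_endpoint, token_endpoint, userinfo_endpoint, jwks_uri}'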

Endpoint: Clicking NEXT will populate the “Endpoint” information automatically. It is, however, essential that the information is reviewed and confirmed.

Scopes: VMware Cloud Director uses the scopes to authorize access to user details. When a client requests an access token, the scopes define the permissions that this token has to access user information. Enter the scope information and click Next.

Claims: You can use this section to map the information VMware Cloud Director gets from the user info endpoint to specific claims. The claims are strings for the field names in the VMware Cloud Director response.

This is the most critical piece of configuration. Mapping of this information is essential for VCD to interpret the token/user information correctly during the login process.

For the OKTA developer account, the user name is the email ID, so I am mapping Subject to email as below.

Key Configuration:

OIDC uses a public key cryptography mechanism. A private key is used by the OIDC provider to sign the JWT token, and the token can be verified by a third party using the public keys published on the OIDC provider's well-known URL. These keys form the basis of security between the parties, so it is essential to keep the private keys protected from any cyber attacks. One of the best practices identified to protect the keys from being compromised is known as key rollover or key refresh.

From VMware Cloud Director 10.3.2 and above, if you want VMware Cloud Director to automatically refresh the OIDC key configurations, turn on the Automatic Key Refresh toggle.

  • Key Refresh Endpoint should get populated automatically as we choose auto discovery.
  • Select a Key Refresh Strategy.
    • Add – Preferred option; adds the incoming set of keys to the existing set of keys. All keys in the merged set are valid and usable.
    • Replace – Replace the existing set of keys with the incoming set of keys.
    • Expire After – You can configure an overlap period between the existing and incoming sets of keys. You can configure the overlapping time using the Expire Key After Period, which you can set in hourly increments from 1 hour up to 1 day.

If you did not use Configuration Discovery in Step 6, upload the private key that the identity provider uses to sign its tokens and click on SAVE

Now go to Cloud Director; under Users, click IMPORT USERS, choose Source as “OIDC”, add the user which exists in OKTA, and assign a role to that user. That's it.

Now you can log out from the vCD console and try to log in again; Cloud Director automatically redirects to OKTA and asks for credentials to validate.

Once the user is authenticated by Okta, they will be redirected back to VCD and granted access per rights associated with the role that was assigned when the user was provisioned.

Verify that the Last Run and the Last Successful Run are identical. The runs start at the beginning of the hour. The Last Run is the time stamp of the last key refresh attempt, and the Last Successful Run is the time stamp of the last successful key refresh. If the time stamps are different, the automatic key refresh is failing, and you can diagnose the problem by reviewing the audit events. (This is only applicable if Automatic Key Refresh is enabled; otherwise, these values are meaningless.)

Bring on your Own OIDC – Tenant Configuration

For the tenant configuration, I have created a video; please take a look here. Tenants can bring their own OIDC and self-service it in the Cloud Director tenant portal.

This concludes the OIDC configuration with VMware Cloud Director. I would like to thank my colleague Ankit Shah for his guidance and review of this document.

Windows Bare Metal Servers on NSX-T overlay Networks

In this post, I will configure a Windows 2016/2019 bare metal server as a transport node in NSX-T and then configure an NSX-T overlay segment on that server, which allows VMs and bare metal servers on the same network to communicate.

To use NSX-T Data Center on a Windows physical server (bare metal server), let's first understand a few terms which we will use in this post.

  • Application – represents the actual application running on the physical server, such as a web server or a database server.
  • Application Interface – represents the network interface card (NIC) which the application uses for sending and receiving traffic. One application interface per physical server is supported.
  • Management Interface – represents the NIC which manages the physical server.
  • VIF – the peer of the application interface which is attached to the logical switch. This is similar to a VM vNIC.

Now let's configure our Windows server to operate in an NSX overlay environment:

Enable WinRM service on Windows 2019

First of all, we need to enable Windows Remote Management (WinRM) on Windows Server 2016/2019 to allow the Windows server to interoperate with third-party software and hardware. To enable the WinRM service with a self-signed certificate, run:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
# Download the WinRM configuration script (wget is a PowerShell alias for Invoke-WebRequest; -OutFile saves it locally)
Invoke-WebRequest -OutFile ConfigureWinRMService.ps1 https://raw.githubusercontent.com/vmware/bare-metal-server-integration-with-nsxt/master/bms-ansible-nsx/windows/ConfigureWinRMService.ps1
powershell.exe -ExecutionPolicy ByPass -File ConfigureWinRMService.ps1

Run the following command to verify the configuration of WinRM listeners:

winrm e winrm/config/listener

NOTE – For production bare metal servers, please enable WinRM with HTTPS for security reasons; the procedure is explained here.

Installing NSX-T Kernel Module on Windows 2019 Server

Now let's proceed with installing the NSX kernel module on the Windows Server 2016/2019 bare metal server. Make sure to download the NSX kernel module for Windows Server 2016/2019 matching the version of your NSX-T instance from VMware downloads.

Start the installation of the NSX kernel module by executing the .exe file on your Windows BM server.

Configure the bare metal server as a transport node in NSX-T

Before we add the bare metal server as a transport node, we first need to create a new uplink profile in NSX-T that we are going to use for the bare metal servers. An uplink profile defines policies for the uplinks. The settings defined by uplink profiles can include teaming policies, active and standby links, the transport VLAN ID, and the MTU setting.

In my lab the Windows 2016/2019 bare metal server has two network adapters: one NIC on the management VLAN and the other on a TEP VLAN (VLAN 160).

Once the uplink profile is configured, we can proceed with adding the Windows 2016/2019 bare metal server as a transport node in NSX-T. In the NSX-T web UI, go to System → Fabric → Nodes and click +ADD.

Enter the Management Interface IP address of your Windows bare metal host and its credentials, and do not change the Installation Location. NSX-T will validate your credentials against the Windows BM and then allow you to move to the next step.

On the next screen, choose a virtual switch name or leave it at the default, select the overlay transport zone (as we are connecting this to overlay), and select the uplink profile and management uplink interface.

On the next screen, configure the IP address, subnet, and gateway for the TEP interface; this can be done by specifying a static IP list or by choosing an IP pool which belongs to the TEP VLAN.

Click Next; this will start preparing your Windows BM for NSX-T.

Once preparation/configuration is completed, we can attach a segment from the above screen, or we can continue later; let's click “Continue Later” for now, and we will add it in a separate step.

Now if you look at your Windows BM in the NSX-T console, it is ready for NSX-T and asking us to attach an overlay segment.

Attach Overlay Segment

Select the host in the “Host Transport Nodes” section, click “Actions”, and then click “Manage Segments”, which takes you to the same screen that SELECT SEGMENT would have shown during the original deployment.

Now select which segment the application interface for the physical server will reside on and click “ADD SEGMENT PORT”.

Add Segment Port and Attach Application Interface

On the Add Segment Port screen:

  • Choose Assign New IP (this will be your application IP on the Windows BM) → NSX Interface Name (the default is “nsx-eth”) – this is the application interface name on the physical server.
  • Default Gateway → provide the T0 or T1 gateway address.
  • IP Assignment – I am using static assignment, but you can also use DHCP or an IP pool for the application interface.
  • Save – once Save is pressed, the configuration is sent to the physical server, and you can see on the physical server that the application IP has been assigned to a virtual interface.

Now you can see the host configuration in the NSX-T Manager console; everything is green and showing up.

Now you can reach this bare metal server from a VM with IP address “172.16.20.101”, which is on the same segment as the physical server, without doing any bridging.

If you click on the Windows server, you can see other information, specifically the “Geneve Tunnels” between the ESXi host on which the VM is running and the Windows BM host on which your application is running.

This completes the configuration. This gives customers/partners the opportunity to run VMs and bare metal servers on the same network, with security (like micro-segmentation) managed from a single console, the NSX-T console. I hope this helps; please share your feedback 🙂

Quick Tip – Delete Stale Entries on Cloud Director CSE

Container Service Extension (CSE) is a VMware vCloud Director (VCD) extension that helps tenants create and work with Kubernetes clusters. CSE brings Kubernetes as a Service to VCD by creating customized VM templates (Kubernetes templates) and enabling tenant users to deploy fully functional Kubernetes clusters as self-contained vApps.

If, for any reason, a tenant's cluster creation gets stuck and continues to show “CREATE:IN_PROGRESS” or “Creating” for many hours, it means that the cluster creation has failed for an unknown reason and the representing defined entity has not transitioned to the ERROR state.

Solution

To fix this, the provider admin needs to use the API to delete these stale entries; a few simple steps will clean them up.

First – let's get the “X-VMWARE-VCLOUD-ACCESS-TOKEN” for API calls by making the API call below:

  • https://<vcd url>/cloudapi/1.0.0/sessions/provider
  • Authentication Type: Basic
  • Username/password – <adminid@system>/<password>

The above API call will return “X-VMWARE-VCLOUD-ACCESS-TOKEN” inside the header section of the response window. Copy this token and use it as the “Bearer” token in subsequent API calls.
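
As a concrete example, here is a minimal curl sketch of that call; the API version in the Accept header is an assumption, so match it to your VCD release:

# POST to the provider sessions endpoint with basic auth and capture the access token header
curl -sk -i -X POST "https://<vcd url>/cloudapi/1.0.0/sessions/provider" \
  -u '<adminid@system>:<password>' \
  -H 'Accept: application/json;version=37.0' | grep -i 'x-vmware-vcloud-access-token'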

Second – we need to get the “Cluster ID” of the stale cluster which we want to delete. To get the “Cluster ID”, go into the Cloud Director Kubernetes Container Clusters extension, click on the cluster which is stuck, and get the Cluster ID in URN format.

Third (Optional) – get the cluster details using the API call below, authenticating with the Bearer token from the first step:

Get  https://<vcd-fqdn>/cloudapi/1.0.0/entities/<cluster-id>/

Fourth – delete the stale cluster using the API call below, providing the “Cluster ID” which we captured in the second step and using the “Bearer” token for authentication:

Delete https://<vcd-fqdn>/cloudapi/1.0.0/entities/<cluster-id>/
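
As a curl invocation, the delete looks roughly like this (a sketch; substitute your FQDN, cluster ID, and token):

# Delete the stale defined entity using the Bearer token
curl -sk -i -X DELETE "https://<vcd-fqdn>/cloudapi/1.0.0/entities/<cluster-id>" \
  -H "Authorization: Bearer <token>"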

The above API call should respond with “204 No Content”, which means the API call has executed successfully.

Now if you log in to the Cloud Director “Kubernetes Container Clusters” extension, the above API call should have deleted the stale/stuck cluster entry.

Now you can go to the Cloud Director vApp section and check whether any vApp/VM is still running for that cluster; shut down that VM and delete it from Cloud Director. Three simple API calls complete the process.

VMware Cloud Director Assignable Storage Policies to Entity Types

Service providers can use storage policies in VMware Cloud Director to create a tiered storage offering (for example Gold, Silver, and Bronze) or even offer dedicated storage to tenants. With the enhancement of storage policies to support VMware Cloud Director entities, providers now have the flexibility to control how tenants use the storage policies. Providers can offer not only tiered storage, but also isolated storage for running VMs, containers, edge gateways, catalogs, and so on.
A common use case that this Cloud Director 10.2.2 update addresses is the need for shared storage across clusters or offering lower cost storage for non-running workloads. For example, instead of having one storage policy for all VMware Cloud Director entities, you can break your storage policy into a “Workload Storage Policy” for all your running VMs and containers, and dedicate a “Catalog Storage Policy” for longer term storage. A slower or lower cost NFS option can back the “Catalog Storage Policy”, while the “Workload Storage Policy” can run on vSAN.

Starting with VMware Cloud Director 10.2.2, if a provider does not want a provider VDC storage policy to support certain types of VMware Cloud Director entities, they can edit and limit the list of entities associated with the policy. Here is the list of supported entity types:

  • Virtual Machines – Used for VMs and vApps and their disks
  • VApp/VM Templates – Used for vApp Templates
  • Catalog Media – Used for Media inside catalogs
  • Named Disks – Used for Named disks
  • TKC – Used for TKG Clusters
  • Edge Gateways – Used for Edge Gateways

You can limit the entity types that a storage policy supports to one or more types from this list. When you create an entity, only the storage policies that support its type are available.

Use Case – Catalog-Only Storage Policy

There are many use cases for assignable storage policies; I am demonstrating this one because many providers have asked for this feature in my discussions. For this use case we will take the entity types Media and vApp Template.

Adding the Media and vApp Template entity types to a storage policy marks the storage policy as usable with VDC catalogs. These entity types can be added at the PVDC storage policy layer. Storage policies that are associated with datastores intended for catalog-only storage can be marked with these entity types, to force only catalog-related items onto the catalog-only datastore.

When added: VCD users will be able to use this storage policy with Media/Templates. In other words, tenants will see this storage policy as an option when pre-provisioning their catalogs on a specific storage policy.

  • In the Cloud Director provider portal, select Resources and click Cloud Resources.
  • Select Provider VDCs, and click the name of the target provider virtual data center.
  • Under Policies, select Storage.
  • Click the radio button next to the target storage policy, and click Edit Supported Types.
  • From the Supports Entity Types drop-down menu, select Select Specific Entities.
  • Select the entities that you want the storage policy to support, and click Save.

Validation

Let's validate this functionality by logging in as a tenant and going into the “Storage Policies” settings; here we can see this Org VDC has two storage policies assigned by the provider.

Now let's deploy a virtual machine in the same Org VDC; you can see that the policy “WCP”, which was marked as catalog-only, is not available for VM provisioning.

In the same Org VDC, let's create a new “Catalog”; here you can see both policies are visible, one exclusively for the “Catalog” and another which is allowed for all entity types.

Policy removal: VCD users will no longer be able to use this storage policy with Media/Templates, but whatever is already stored there will continue to be there.

This addition to Cloud Director gives providers the opportunity to manage storage based on entity type. This is just one use case; similarly, one particular storage policy could be used for edge placement, another could be used to spin up production-grade Tanzu Kubernetes clusters, while default storage can be used by CSE native Kubernetes clusters for development container workloads. This opens up new monetization opportunities for providers: upgrade your Cloud Director environment and start monetizing.

This post is also available as a podcast.

Auto Scale Applications with VMware Cloud Director

Starting with VMware Cloud Director 10.2.2, Tenants can auto scale applications depending on the current CPU and memory utilization. Depending on predefined criteria for the CPU and memory use, VMware Cloud Director can automatically scale up or down the number of VMs in a selected scale group.

Cloud Director Scale Groups are a new top level object that tenants can use to implement automated horizontal scale-in and scale-out events on a group of workloads. You can configure auto scale groups with:

  • A source vApp template
  • A load balancer network
  • A set of rules for growing or shrinking the group based on the CPU and memory use

VMware Cloud Director automatically spins up or shuts down VMs in a scaling group based on the above three settings. This blog post will help you enable scale groups in Cloud Director, and we will also configure a scale group.

Configure Auto Scale

Log in to the Cloud Director 10.2.2 cell with admin/root credentials and enable metric data collection and publishing, either by setting up metrics collection in a Cassandra database or by collecting metrics without data persistence. In this post we are going to configure it without metrics data persistence; to collect metrics data without persistence, run the following commands:

#/opt/vmware/vcloud-director/bin/cell-management-tool manage-config -n statsFeeder.metrics.collect.only -v true 

#/opt/vmware/vcloud-director/bin/cell-management-tool manage-config -n statsFeeder.metrics.publishing.enabled -v true

The second step is to create a file named “metrics.groovy” in the /tmp folder of the Cloud Director appliance with the following contents:

configuration {
    // Collect the CPU ready time (summation) metric for virtual machines
    metric("cpu.ready.summation") {
        currentInterval=20
        historicInterval=20
        entity="VM"
        instance=""
        minReportingInterval=300
        aggregator="AVERAGE"
    }
}

Change the file permissions appropriately and import the file using the command below:

$VCLOUD_HOME/bin/cell-management-tool configure-metrics --metrics-config /tmp/metrics.groovy

Let's enable the auto scaling plugin by running the commands below on the Cloud Director cell:

$VCLOUD_HOME/bin/cell-management-tool configure-autoscale --set enabled=true
$VCLOUD_HOME/bin/cell-management-tool configure-autoscale --set username=<username> (an account with admin privileges)
$VCLOUD_HOME/bin/cell-management-tool configure-autoscale --encrypt --set password=<password> (password for the above account)

If you look in the Cloud Director provider console under Customize Portal, there is a plugin which the provider can enable for tenants who need auto scale functionality for their applications, or which can be made available to all tenants.

Auto scale was released with VMware Cloud Director 10.2.2, which allows service providers to grant tenants the rights to create scale groups. Under Tenant Access Control, select Rights Bundles, then select the “vmware:scalegroup Entitlement” bundle, and click Publish.

Also ensure the provider adds the necessary VMWARE:SCALEGROUP rights to the tenant roles that want to use scale groups.

Tenant Self-Service

In the tenant portal after login, select Applications, select the Scale Groups tab, and click New Scale Group.

Select an organization VDC in which you, as the tenant, want to create the scale group.

Alternatively, tenants can access scale groups from a selected organization virtual data center (VDC) by going into that specific organization VDC.

  • Enter a name and a description of the new scale group.
  • Select the minimum and maximum number of VMs to which you want the group to scale and click Next.

Select a VM template for the VMs in the scale group and a storage policy, and click Next. The template needs to be pre-populated in a catalog by the tenant, or it can be provided by the provider and published for tenants.

The next step is to select a network for the scale group. If the tenant's org VDC is backed by NSX-T Data Center and NSX ALB (Avi) has been published as a load balancer by the provider, the tenant can choose the NSX ALB load balancer edge on which load balancing has been enabled and a server pool has been set up by the tenant before enabling scale groups.

If the tenant wants to manage the load balancer on their own, or if there is no need for a load balancer, then select I have a fully set-up network; auto scale will automatically add VMs to this network.

VMware Cloud Director starts the initial expansion of the scale group to reach the minimum number of VMs. Basically, it will start creating VMs from the template that the tenant selected while creating the scale group, and it will continue to spin up VMs until the minimum number specified in the scale group is reached.

Add an Auto Scaling Rule

  1. Click Add Rule.
  2. Enter a name for the rule.
  3. Select whether the scale group must expand or shrink when the rule takes effect.
  4. Select the number of VMs by which you want the group to expand or shrink when the rule takes effect.
  5. Enter a cooldown period in minutes after each auto scaling action in the group. The conditions cannot trigger another scaling action until the cooldown period expires, and the cooldown period resets whenever any rule of the scale group takes effect.
  6. Add a condition that triggers the rule. The duration period is the time for which the condition must be valid to trigger the rule. To trigger the rule, all conditions must be met.
  7. To add another condition, click Add Condition. Tenants can add multiple conditions.
  8. Click Add.

From the details view of a scale group, when you select Monitor, you can see all tasks related to this scale group. For example, you can see the time of creation of the scale group, all growing or shrinking tasks for the group, and the rules that initiated the tasks; inside the Virtual Machines section you can see which VM was created at what time, the IP address of each VM, and so on.

Here is the section showing scale tasks that were triggered and their status.

The Virtual Machines section, as mentioned, provides information and details about the scale group VMs.

Here is another section showing triggered scale tasks with their status, start time, and completion time.

Auto Scale Groups in Cloud Director 10.2.2 bring very important functionality natively to Cloud Director: they require no external components such as vRealize Orchestrator or vRealize Operations and incur no additional cost to the tenant or provider. Go ahead, upgrade your Cloud Director, enable it, and let your tenants enjoy this cool functionality.

Getting Started with Tanzu Basic

In the process of modernizing your data center to run VMs and containers side by side, you can run Kubernetes as part of vSphere with Tanzu Basic. Tanzu Basic embeds Kubernetes into the vSphere control plane for the best administrative control and user experience: provision clusters directly from vCenter and run containerized workloads with ease. Tanzu Basic is also the most affordable edition.

To install and configure Tanzu Basic without NSX-T, at a high level there are four steps which we need to perform, and I will be covering them across three blog posts:

  1. vSphere 7 with a cluster with HA and DRS enabled should already be configured
  2. Installation of VMware HA Proxy Load Balancer – Part1
  3. Tanzu Basic – Enable Workload Management – Part2
  4. Tanzu Basic – Building TKG Cluster – Part3

Deploy VMware HAProxy

There are a few topologies for setting up Tanzu Basic with vSphere-based networking. For this blog we will deploy the HAProxy VM with three virtual NICs: one on a “Management” network, one on a “Workload” network, and one on a “Frontend” network, which is used by DevOps users and through which external services access HAProxy via virtual IPs.

Network      Use
Management   Communicating with vCenter and HAProxy
Workload     IPs assigned to Kubernetes nodes
Frontend     DevOps users and external services

For this blog, I have created three VLAN-based networks with the below IP ranges:

Network    IP Range          VLAN
tkgmgmt    192.168.115.0/24  115
Workload   192.168.116.0/24  116
Frontend   192.168.117.0/24  117

Here is the topology diagram: HAProxy is configured with three NICs, and each NIC is connected to one of the VLANs we created above.

NOTE – If you want to deep dive into this networking, refer Here; that blog post describes it very nicely, and I have used the same networking schema in this lab deployment.

Deploy VMware HAProxy

This is not a common HAProxy; it is a customized one whose Data Plane API is designed to enable Kubernetes workload management with Project Pacific on vSphere 7. VMware HAProxy deployment is very simple: you can directly access/download the OVA from Here and follow the same procedure as for any other OVA deployment on vCenter. There are a few important steps, which I cover below:

On the Configuration screen, choose “Frontend Network” for the three-NIC deployment topology.

Next is the Networking section, which is the heart of the solution; here we map the port groups created above to the Management, Workload and Frontend networks.

The Management network is on VLAN 115; this is the network where the vSphere with Tanzu Supervisor control plane VMs/nodes are deployed.

The Workload network is on VLAN 116; this is where the Tanzu Kubernetes cluster VMs/nodes will be deployed.

The Frontend network is on VLAN 117; this is where the load balancer VIPs (Supervisor API server, TKG API servers, TKG LB Services) are provisioned. The Frontend and Workload networks must be routable to each other for successful WCP (Workload Control Plane) enablement.

The next page is the most important one: the VMware HAProxy appliance configuration. Provide a root password and tick/untick the root login option as you prefer. The TLS fields will be automatically generated if left blank.

In the “network config” section, provide the network details of the VMware HAProxy appliance for the management, workload and frontend/load balancer networks. These all require static IP addresses in CIDR format; specify a CIDR suffix that matches the subnet mask of your networks.

For Management IP: 192.168.115.5/24 and GW: 192.168.115.1

For Workload IP: 192.168.116.5/24 and GW: 192.168.116.1

For Frontend IP: 192.168.117.5/25 and GW: 192.168.117.1. This is not optional if you selected Frontend in the Configuration section.

In the Load Balancing section, enter the load balancer IP ranges. These IP addresses will be used as virtual IPs by the load balancer and come from the Frontend network IP range.

Here I am specifying 192.168.117.32/27; a /27 spans 32 addresses, so this segment gives me 30 usable addresses for VIPs for Tanzu management plane access and for applications exposed for external consumption. Ignore “192.168.117.30” in the image background.

Enter the Data Plane API management port (5556), and also enter a username and password for the load balancer Data Plane API.

Finally, review the summary and click Finish. This deploys the VMware HAProxy LB appliance.

Once deployment has completed, power on the appliance, SSH into the VM using the management plane IP, and check that all the interfaces have the correct IPs:
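
For example, from the appliance shell (standard Linux tooling; the exact interface names on the appliance may differ):

#ip addr show
#ip route show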

Also check that you can ping the frontend IP range and the other IP ranges. Stay tuned for Part 2.

Load Balancer as a Service with Cloud Director

Featured

NSX Advanced Load Balancer's (AVI) intent-based software load balancer provides scalable application delivery across any infrastructure. AVI provides 100% software load balancing to ensure a fast, scalable and secure application experience. It delivers elasticity and intelligence across any environment, scales from 0 to 1 million SSL transactions per second in minutes, and achieves 90% faster provisioning and 50% lower TCO than a traditional appliance-based approach.

With the release of Cloud Director 10.2, NSX ALB is natively integrated with Cloud Director to provide self-service Load Balancing as a Service (LBaaS): providers can release load balancing functionality to tenants, and tenants consume it based on their requirements. In this blog post we will cover how to configure LBaaS.

Here is High Level workflow:

  1. Deploy NSX ALB Controller Cluster
  2. Configure NSX-T Cloud
  3. Discover NSX-T inventory, logical segments, NSGroups (ALB does it automatically)
  4. Discover vCenter inventory, hosts, clusters, switches (ALB does it automatically)
  5. Upload the SE OVA to a content library (ALB does it automatically; you just need to specify the name of the content library)
  6. Register NSX ALB Controller, NSX-T Cloud and Service Engines to Cloud Director and Publish to tenants (Provider Controlled Configuration)
  7. Create virtual services, pools and other settings (Tenant Self Service)
  8. Create/delete SE VMs & connect them to tenant networks (ALB/VCD automatically)

Deploy NSX ALB (AVI) Controller Cluster

The NSX ALB (AVI) Controller provides a single point of control and management for the cloud. The AVI Controller runs on a VM and can be managed using its web interface, CLI, or REST API, or in this case Cloud Director. The AVI Controller stores and manages all policies related to services and management. To ensure AVI Controller high availability, we deploy three AVI Controller nodes to create a 3-node AVI Controller cluster.

The deployment process is documented Here & the cluster creation process is Here.

Create NSX-T Cloud inside NSX ALB (AVI) Controller

The NSX ALB (AVI) Controller uses APIs to interface with the NSX-T Manager and vCenter to discover the infrastructure. Here are the high-level activities to configure an NSX-T cloud in the NSX ALB management console:

  1. Configure NSX-T manager IP/URL (One per Cloud)
  2. Provide admin credentials
  3. Select Transport zone (One to One Mapping – One TZ per Cloud)
  4. Select Logical Segment to use as SE Management Network
  5. Configure vCenter server IP/URL (One per Cloud)
  6. Provide Login username and password
  7. Select Content Library to push SE OVA into Content Library

Service Engine Groups & Configuration

Service Engines are created within a group, which contains the definition of how the SEs should be sized, placed, and made highly available. Each cloud will have at least one SE group.

  1. SE Groups contain sizing, scaling, placement and HA properties
  2. A new SE will be created from the SE Group properties
  3. SE Group options will vary based upon the cloud type
  4. An SE is always a member of the group it was created within, in this case the NSX-T cloud
  5. Each SE group is an isolation domain
  6. Apps may gracefully migrate, scale, or failover across SEs in the groups

​Service Engine High Availability:

Active/Standby

  1. VS is active on one SE, standby on another
  2. No VS scaleout support
  3. Primarily for default gateway / non-SNAT app support
  4. Fastest failover, but half of SE resources are idle

​Elastic N + M 

  1. All SEs are active
  2. N = number of SEs a new Virtual Service is scaled across
  3. M = the buffer, or number of failures the group can sustain
  4. SE failover decision determined at time of failure
  5. Session replication done after new SE is chosen
  6. Slower failover, less SE resource requirement

Elastic Active / Active 

  1. All SEs are active
  2. Virtual Services must be scaled across at least 2 Service engines
  3. Session info proactively replicated to other scaled service engines
  4. Faster failover, require more SE resources

Cloud Director Configuration

Cloud Director configuration is two-fold: provider config and tenant config. Let's first cover the provider config.

Provider Configuration

Register AVI Controller: The provider administrator logs in as an admin and registers the AVI Controller with Cloud Director. The provider has the option to add multiple AVI Controllers.

NOTE – In case you are registering with NSX ALB's default self-signed certificate and it throws an error while registering, regenerate the self-signed certificate in NSX ALB.

Register NSX-T cloud

Next we need to register with Cloud Director the NSX-T cloud which we configured in the ALB Controller:

  1. Select one of the registered AVI Controllers
  2. Provide a meaningful name for the controller
  3. Select the NSX-T cloud which we had registered in AVI
  4. Click on ADD.

Assign Service Engine groups

Now register service engine groups, either “Dedicated” or “Shared”, based on tenant requirements; a provider can also have both types of groups and assign them to tenants as required.

  1. Select NSX-T Cloud which we had registered above
  2. Select the “Reservation Model”
    1. Dedicated Reservation Model: AVI creates two Service Engine nodes for each LB-enabled Org VDC Edge GW.
    2. Shared Reservation Model: Shared is elastic and shared among all tenants. AVI creates a pool of service engines that are shared across tenants. Capacity allocation is managed in VCD; AVI elastically deploys and un-deploys service engines based on usage.

Provider Enables and Allocates resources to Tenant

The provider enables LB functionality in the context of an Org VDC Edge by following the steps below:

  1. Click on Edges 
  2. Choose the edge on which to enable load balancing
  3. Go to “Load Balancer” and click on “General Settings”
  4. Click on “Edit”
  5. Toggle on Activate to activate the load balancer
  6. Select Service Specification

The next step is to assign Service Engines to the tenant based on requirements. For that, go to Service Engine Group, click “ADD”, and add one of the SE groups registered previously to one of the customer's edges.

Provider can restrict usage of Service Engines by configuring:

  1. Maximum Allowed: The maximum number of virtual services the Edge Gateway is allowed to use.
  2. Reserved: The number of guaranteed virtual services available to the Edge Gateway.

Tenant User Self Service Configuration

Pools: Pools maintain the list of servers assigned to them and perform health monitoring, load balancing, and persistence.

  1. Inside General Settings some of the key settings are:
    1. Provide Name of the Pool
    2. Load Balancing Algorithm
    3. Default Server Port
    4. Persistence
    5. Health Monitor
  2. Inside Members section:
    1. Add the virtual machine IP addresses that need to be load balanced
    2. Define State, Port and Ratio
    3. SSL Settings allow SSL offload and Common Name Check

Virtual Services: A virtual service advertises an IP address and ports to the external world and listens for client traffic. When a virtual service receives traffic, it may be configured to:

  1. Proxy the client’s network connection.
  2. Perform security, acceleration, load balancing, gather traffic statistics, and other tasks.
  3. Forward the client’s request data to the destination pool for load balancing.

The tenant chooses the Service Engine Group the provider has assigned, then the load balancer pool created in the step above, and, most importantly, the virtual IP. This IP address can be from the external IP range of the Org VDC, or, if you want an internal VIP, you can use any IP.

So in my example, I am running two virtual machines with Org VDC internal IP addresses and a VIP from the external public IP address range; if I browse to the VIP, I can reach the web servers successfully using the VCD/AVI integration.

This completes the basic integration and configuration of LBaaS using Cloud Director & NSX Advanced Load Balancer. Feel free to share feedback.

VMware Cloud Director Two Factor Authentication with VMware Verify

In this post, I will be configuring two-factor authentication (2FA) for VMware Cloud Director using Workspace ONE Access, formerly known as VMware Identity Manager (vIDM). Two-factor authentication is a mechanism that checks username and password as usual, but adds an additional security control before users are authenticated. It is a particular deployment of a more generic approach known as Multi-Factor Authentication (MFA). Throughout this post, I will be configuring VMware Verify as that second authentication factor.

What is VMware Verify ?

VMware Verify is built in to Workspace ONE Access (vIDM) at no additional cost, providing a 2FA solution for applications. VMware Verify can be set as a requirement on a per-app basis for web or virtual apps on the Workspace ONE launcher, or to log in to Workspace ONE to view your launcher in the first place. The VMware Verify app is currently available on iOS and Android. VMware Verify supports 3 methods of authentication:

  1. OneTouch approval
  2. One-time passcode via VMware Verify app (soft token)
  3. One-time passcode over SMS

By using VMware Verify, security is increased since a successful authentication does not depend only on something users know (their passwords) but also on something users have (their mobile phones), and for a successful break-in, attackers would need to steal both things from compromised users.

1. Configure VMware Verify

First you need to download and install VMware Workspace ONE Access, which is very simple to deploy using an OVA. VMware Verify is provided as a service and thus does not require installing anything on an on-premises server. To enable VMware Verify, you must contact VMware support; they will provide you a security token, which is all you need to enable the integration with VMware Workspace ONE Access (vIDM). Once you get the token, log in to vIDM as an admin user and then:

  1. Click on the Identity & Access Management tab
  2. Click on the Manage button
  3. Select Authentication Methods
  4. Click on the configure icon (pencil) next to VMware Verify
  5. A new window will pop up, in which you need to select the Enable VMware Verify checkbox, enter the security token provided by VMware support, and click Save.

2. Create a Local Directory on VMware Workspace One Access

VMware Workspace ONE Access supports not only Active Directory and LDAP directories but also other types of directories, such as local directories and Just-in-Time directories. For this lab, I am going to create a local directory using the local directory feature of Workspace ONE Access. Local users are added to a local directory on the service, and we need to manage the local user attribute mapping and password policies. You can create local groups to manage resource entitlements for users.

  1. Select the Directories tab
  2. Click on “Add Directory”
  3. Specify the directory and domain name (this is the same domain name I registered for VMware Verify)

3. Create/Configure a built-in Identity Provider

Once the second authentication factor is enabled as described in steps 1 and 2, it must next be added as an authentication method to a Workspace ONE Access built-in identity provider. If one already exists in your environment, you can reconfigure it; alternatively, you can create a new built-in identity provider as explained below. Log in to Workspace ONE Access as an admin user and then:

  1. Select the Identity & Access Management tab
  2. Click on the Manage button
  3. Click on the Identity Providers link
  4. Click on the Add Identity Provider button and select Create Built-in IDP
  5. Enter a name describing the Identity Provider (IdP)
  6. Select which users can authenticate using the IdP – in the example below I am selecting the local directory I created above.
  7. Network ranges from which users will be directed to the authentication mechanism described on the IdP
  8. The authentication methods to associate with this IdP – Here I am selecting VMware Verify as well as Local Directory.
  9. Finally click on the Add button

4. Update Access Policies on Workspace One Access

The last configuration step on Workspace One Access (vIDM) is to update the default access policy to include the second factor authentication mechanism. For that, login into Workspace One Access as an admin user and then:

  1. Select the Identity & Access Management tab
  2. Click on the Manage button
  3. Click on the Policies link
  4. Click on the Edit Default Policy button
  5. This will open up a new page showing the details of the default access policy. Go to “Configuration” and click on “ALL RANGES”.

A new window will pop up. Modify the settings right below the line “then the user may authenticate using:”

  1. Select Password as the first authentication method – this way users will have to enter their ID and password as defined in the configured local directory
  2. Select the second authentication mechanism; here I am adding VMware Verify – after a successful password authentication, users will get a notification on their mobile phones to accept or deny the login request.
  3. I am leaving the line “If preceding Authentication Method fails or is not applicable, then:” empty – this is because I don't want to configure any fallback authentication mechanism; you can choose one based on your needs.

5. Download the app on your mobile and register a user from a Cloud Director organization

  1. Access the app store on your mobile phone, search for VMware Verify, and download it.
  2. Once it is downloaded, open the application. It will ask for your mobile number and e-mail address. Enter your details; on the screenshot below, I'm providing my mobile number and an e-mail which is only valid in my lab. After clicking OK, you will be offered two options for verifying your identity:
  1. Receiving an SMS message – the SMS contains a registration code that you enter into the app.
  2. Receiving a Phone Call – after clicking on this option, the app will show a registration code you will need to type on the phone pad once you receive the call
  3. Since I am using the SMS method, the app asks me to enter the code received via SMS manually (XopRcVjd4u2)
  4. Once your identity has been verified, you will be asked to protect the app by setting a PIN. After that, the app will show that there are no accounts configured yet.
  5. Click on Account and add the account

Immediately after that, we will start receiving tokens on the VMware Verify mobile app, so at this moment you are ready to move to the next step.

6. Enable VMware Cloud Director Federation with VMware Workspace ONE Access

There are three authentication methods that are supported by vCloud Director:

Local: local users which are created at the time of installing vCD or while creating any new organization.

LDAP service:  LDAP service enables the organisations to use their own LDAP servers for authentication. Users can then be imported into vCD from the configured LDAP.

SAML Identity Provider: A SAML Identity Provider can be used to authenticate users in organisations. SAML v2.0 metadata is required for the service to be configured. The metadata must include the location of the single sign-on service, the single logout service, and the X.509 certificate for the service. In this post we will be using federation between VMware Workspace One Access with VMware Cloud Director.

So, let's go ahead and log in to the VMware Cloud Director organization, go to “Administration”, and click on “SAML”.

  1. Enable federation by setting “Entity ID” to a unique string; in this case I am setting it to the org name, which in my case is “abc”
  2. Then click on “Generate” to generate a new certificate and click “SAVE”
  3. Download the metadata from the link; it will download the file “spring_saml_metadata.xml“. This activity can be performed by a system or org administrator.
  4. In the VMware Workspace ONE Access (VIDM) admin console, go to “Catalog” and create a new web application.
    1. Enter the application name and description, upload a nice icon, and choose a category.
  5. On the next screen, keep the authentication type SAML 2.0 and paste the XML metadata downloaded in step 3 into the URL/XML window. Scroll down to Advanced Properties.
  6. In Advanced Properties we will keep the defaults but add the Custom Attribute Mappings, which describe how VIDM user attributes translate to VCD user attributes.
  7. Now we can finish the wizard by clicking Next, selecting the access policy (keep the default), and reviewing the summary on the next screen.
  8. Next we need to retrieve the metadata configuration of VIDM by going back to Catalog and clicking on Settings. From SAML Metadata, download the Identity Provider (IdP) metadata.
  9. Now we can finalize the SAML configuration in vCloud Director: on the Federation page, toggle the Use SAML Identity Provider button to enable it, import the downloaded metadata (idp.xml) with the Browse and Upload buttons, and click Apply.
  10. We first need to import some users/groups to be able to use SAML. You can import VMware Workspace ONE Access (VIDM) users by their user name or group, and we can also assign a role to the imported user.

This completes the federation process between VMware Workspace ONE Access (VIDM) and VMware Cloud Director. For More details you can refer This Blog Post.

Result – Cloud Director Two Factor Authentication in Action

Let your tenants open a browser and go to their tenant URL; they will be automatically redirected to the VMware Workspace ONE Access page for authentication:

  1. The user enters a user name and password; if authenticated successfully, the flow moves to 2FA
  2. In the next step, the user gets a notification on their mobile phone
  3. Once the user approves the authentication on the phone, VMware Workspace ONE Access allows access based on the role given in VMware Cloud Director.

On-Board a New User

  1. Create a new user in VMware Workspace ONE Access and also grant them access to the application.
  2. The user gets an email to set up their password and must configure it.
  3. The administrator logs in to Cloud Director and imports the newly created user from SAML with a Cloud Director role.
  4. The user browses to the cloud URL; after logging in to the portal with user ID and password, they are asked to provide a mobile number for second factor authentication.
  5. After entering the mobile number, if the user has installed the “VMware Verify” app, they get a notification to Approve/Deny; if the app has not been installed, they click “Sign in with SMS”, receive an SMS, and enter that code for second factor authentication.
  6. Once the user enters the passcode received on their cell phone, VMware Workspace ONE Access allows them to log in to Cloud Director.

This completes the installation and configuration of VMware Verify with VMware Cloud Director. You can add additional touches like branding of your cloud, which will give your cloud its own identity.

Ingress on Cloud Director Container Service Extension

In this blog post I will be deploying an Ingress controller, alongside the load balancer deployed in the previous post, into a tenant organization VDC Kubernetes cluster which was deployed by Cloud Director Container Service Extension.

What is Ingress in Kubernetes

“NodePort” and “LoadBalancer” let you expose a service by specifying that value in the service's type. Ingress, on the other hand, is a completely independent resource to your service: you declare, create and destroy it separately from your services.

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give services externally-reachable URLs, load balance traffic, terminate SSL/TLS, and offer name-based virtual hosting.


Pre-requisite

Before we begin we’ll need to have a few pieces already in place:

  • A Kubernetes cluster (See Deployment Options for provider specific details)
  • kubectl configured with admin access to your cluster
  • RBAC must be enabled on your cluster

Install Contour

To install Contour, Run:

  • $ kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

This command creates:

  • A new namespace projectcontour
  • A Kubernetes Daemonset running Envoy on each node in the cluster listening on host ports 80/443
  • Now we need to retrieve the external address of the load balancer assigned to Contour by the load balancer we deployed in the previous post. To get the LB IP, run this command:
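    • A minimal sketch, assuming the quickstart created the Envoy service in the projectcontour namespace (service and namespace names can differ between Contour versions):
    • #kubectl get -n projectcontour service envoy -o wide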
  • The “External IP” is from the range of IP addresses we configured in the LB config; we will NAT this IP on the VDC Edge gateway so it can be accessed from outside/the internet.

Deploy an Application

Next we need to deploy at least one Ingress object before Contour can serve traffic. Note that, as a security feature, Contour does not expose a port to the internet unless there's a reason it should. A great way to test your Contour installation is to deploy a demo application.

In this example we will deploy a simple web application, configure load balancing for it using the Ingress resource, and access it using the load balancer IP/FQDN. This application is hosted on GitHub and can be downloaded from Here. Once downloaded:

  • Create the coffee and the tea deployments and services
  • Create a secret with an SSL certificate and a key
  • Create an Ingress resource (see the sketch after this list)
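
As a rough sketch of those three steps: the manifest, certificate, and service names below (cafe.yaml, cafe.crt, cafe.key, coffee-svc, tea-svc) are hypothetical placeholders; adjust them to match the manifests you actually downloaded:

#kubectl apply -f cafe.yaml
#kubectl create secret tls cafe-secret --cert=cafe.crt --key=cafe.key
#kubectl apply -f cafe-ingress.yaml

And cafe-ingress.yaml itself could look like this on a recent Kubernetes version (networking.k8s.io/v1 API):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 80
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 80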

This completes the deployment of the application.

Test the Application

To access the application, browse the coffee and the tea services from a desktop which has access to the service network. You will also need to add the hostname/IP to your /etc/hosts file or your DNS server.

  • To get Coffee:
  • If your prefer Tea:

This completes the installation and configuration of Ingress on VMware Cloud Director Container Service Extension. Contour is VMware's open source Ingress controller and offers rich features to consume; it can be found Here.

Load Balancer for Cloud Director Container Service Extension

In this blog post I will be deploying a load balancer into a tenant organization VDC Kubernetes cluster which was deployed by Cloud Director Container Service Extension.

What is a Load Balancer in Kubernetes?

To understand load balancing on Kubernetes, we first need to understand some Kubernetes basics:

  • A “pod” in Kubernetes is a set of containers that are related in terms of their function, and a “service” is a set of related pods that have the same set of functions. This level of abstraction insulates the client from the containers themselves. Pods can be created and destroyed by Kubernetes automatically, and since every new pod is assigned a new IP address, pod IP addresses are not stable; direct communication with pods is therefore not generally possible. However, services have their own relatively stable IP addresses, so a request from an external resource is made to a service rather than a pod, and the service then dispatches the request to an available pod.

An external load balancer applies logic that ensures the optimal distribution of these requests. To create a load balancer, your clusters must be deployed into a Cloud Director based cloud; follow the steps below to configure a load balancer for your Kubernetes cluster:


MetalLB Load Balancer

MetalLB hooks into your Kubernetes cluster and provides a network load balancer implementation. In short, it allows you to create Kubernetes services of type “LoadBalancer” in such clusters. For more information, refer here.

Pre-requisite

MetalLB requires the following pre-requisite to function:

  • A CSE Kubernetes cluster, running Kubernetes 1.13.0 or later.
  • A cluster network configuration that can coexist with MetalLB.
  • Some IPv4 addresses for MetalLB to hand out.
  • Here is my CSE cluster info; I will be using this cluster for this demo:

MetalLB Load Balancer Deployment

MetalLB deployment is a very simple three-step process; follow the steps below:

  • Create a new namespace as below:
    • #kubectl create ns metallb-system
  • The command below deploys MetalLB to your cluster, under the metallb-system namespace:
    • #kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
    • This creates the metallb-system/controller deployment (the cluster-wide controller that handles IP address assignments), the metallb-system/speaker daemonset (the component that speaks the protocol(s) of your choice to make the services reachable), and service accounts for the controller and speaker, along with the RBAC permissions the components need to function.
  • Create the memberlist secret required by the MetalLB components:
    • #kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

NOTE – I am accessing my CSE cluster using NAT; that's the reason I am using "--insecure-skip-tls-verify".

MetalLB Layer 2 Configuration

The installation manifest does not include a configuration file. MetalLB’s components will still start, but will remain idle until you define and deploy a configmap. The specific configuration depends on the protocol you want to use to announce service IPs. Layer 2 mode is the simplest to configure and in many cases, you don’t need any protocol-specific configuration, only IP addresses.

  • The following ConfigMap gives MetalLB control over IPs from 192.168.98.220 to 192.168.98.250, and configures Layer 2 mode:
    • apiVersion: v1
      kind: ConfigMap
      metadata:
        namespace: metallb-system
        name: config
      data:
        config: |
          address-pools:
          - name: default
            protocol: layer2
            addresses:
            - 192.168.98.220-192.168.98.250
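  • Save the above as a file (for example config.yaml, an arbitrary name), apply it, and optionally verify that the MetalLB pods are running:
    • #kubectl apply -f config.yaml
    • #kubectl get pods -n metallb-system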

This completes the installation and configuration of the load balancer. Let's go ahead and publish an application using the Kubernetes service “type=LoadBalancer”; CSE and MetalLB will take care of everything else.
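
As a minimal sketch of what that looks like, here is a Service of type LoadBalancer; the name, selector, and ports are hypothetical, and MetalLB will assign the service an external IP from the pool configured above:

apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080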

Deploy an Application

Before deploying the application, I want to show the Cloud Director network topology where this container workload is deployed and where the Kubernetes services are created. Here we have one red segment (192.168.98.0/24) for container workloads, where CSE has deployed the Kubernetes worker nodes, and on the same network we have deployed our MetalLB load balancer.

Kubernetes pods will be created on Weave networks, the internal software-defined networking for CSE, and services will be exposed using the load balancer, which is configured with the OrgVDC network.


Let's get started. We are going to use the “guestbook” application, which uses Redis to store its data: it writes to a Redis master instance and reads data from multiple Redis worker (slave) instances. Let's go ahead and deploy it:

NOTE – All the above steps are covered in detail on the Kubernetes.io website – here is the Link.
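
If the frontend Service from that tutorial was created as ClusterIP (manifests differ between versions of the tutorial), one way to expose it through MetalLB, assuming the Service is named frontend as in the tutorial, is to patch its type:

#kubectl patch service frontend -p '{"spec": {"type": "LoadBalancer"}}'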

Accessing Application

To access the guestbook Service, you need to find the external IP of the Service you just set up by running the command:

  • #kubectl get services
  • Go back to your organization VDC and create a NAT rule so that the service can be accessed using a routable/public IP.
  • Copy the IP address in EXTERNAL-IP column, and load the page in your browser:

It is a very easy and simple process to deploy and access your containerized applications running on the secure and easy-to-use VMware Cloud Director. Go ahead and start running containerized apps with upstream Kubernetes on Cloud Director.

Stay tuned! In the next post I will be deploying Ingress on a CSE cluster.

Install and Configure Cloud Director App Launchpad

In continuation of my last post on the same topic, in this post we will deploy and configure Cloud Director App Launchpad.

With App Launchpad VMware cloud providers can now deliver their own catalog based applications or VMware Cloud Marketplace certified 3rd party Cloud Applications, and Bitnami catalog applications directly to customers through a simple catalog interface from a VMware Cloud Director plugin. This capability allows Cloud Providers to deliver application Platform as a Service to customers who needn’t know anything about the supporting infrastructure for the catalog applications they deploy.

NOTE – In this release (App Launchpad 1.0), tenants can launch only single-VM applications.

Prerequisites for App Launchpad Installation

Before we install and configure App Launchpad, note that it requires a few external components at specific supported versions, which you must deploy and configure first.

  • Create a new virtual machine with the below requirements
  • Ensure Rabbit MQ is installed and configured under Cloud Director extensibility before deploying App Launchpad.
  • Inside same Rabbit MQ Server create a new Exchange with type as “direct” and a dedicated AMQP user that has full permissions to the virtual host of the AMQP broker.


Install Cloud Director App Launchpad

App Launchpad is deployed by installing an RPM package on a dedicated Linux virtual machine. Download App Launchpad from here and transfer the file to the ALP server; installation is a very simple process:

  • Open an SSH connection to the installation target Linux virtual machine and log in by using a user account with sufficient privileges to install an RPM package.
  • Install the RPM package by running the installation command.
    • yum install -y vmware-vcd-alp-1.0.0-1593616.x86_64.rpm

Connect App Launchpad with Cloud Director

To configure App Launchpad with Cloud Director, we will use the alp command line utility. By using this utility:

  • We will establish a connection between App Launchpad and VMware Cloud Director
  • Define or create the App-Launchpad-Service account
  • and install the App Launchpad user interface plug-in for VMware Cloud Director.
  • The alp connect command also configures App Launchpad with your AMQP broker.
  • #alp connect --sa-user alpadmin --sa-pass <PASSWORD> --url https://10.96.98.50 --admin-user admin@system --admin-pass <PASSWORD> --amqp-user alp --amqp-pass <PASSWORD> force --amqp-exchange alpext
  • Accept “EULA” and “certificate”
  • If you have entered the correct information, it should report that it was successfully configured
  • Restart ALP service using
    • #systemctl restart alp
  • You can run #alp show to verify the connection

Configure App Launchpad

  • Now you can go to Cloud Director and check the installed ALP plugin.
  • Click on “LAUNCH SETUP” to configure it to offer Applications as a Service
  • If you want to configure the infrastructure for App Launchpad automatically, select Yes and the software will set up everything automatically.
  • In case you choose “No, I will set it up on my own”, you need to set up the prerequisites manually.
  • App Launchpad supports the use of applications from the Bitnami applications catalog that is available in the VMware Cloud Marketplace.

    You can also create catalogs of your custom, in-house applications and configure App Launchpad to work with these catalogs.

  • Create sizing templates for the applications.
    1. Enter a name for the sizing template.
    2. Enter a vCPU count, a memory size (in GB), and a disk size (in GB)
  • To complete the initial configuration of App Launchpad, click Finish.

  • If everything goes fine and there are enough resources in Cloud Director, you will see “App Launchpad Setup Complete”

Onboarding Bitnami Applications

VMware cloud providers can import applications from the Bitnami applications catalog that is available in the VMware Cloud Marketplace. To begin, the provider must log in to the VMware Cloud Marketplace and subscribe to the Bitnami application they wish to deploy. Follow these steps:

  • Log in to the VMware Cloud Marketplace.
  • From the “Catalog” page, find the Bitnami application you wish to deploy (with App Launchpad 1.0, tenant users can only run single-VM applications) and select it to subscribe.
  • On the “Settings” page, choose “VCD” as the platform and select the correct version. Set the subscription type to “BYOL”. Click “Next” to proceed.


  • The subscription will now be added to your Cloud Director App Launchpad organization, where tenants can use it.
  • Make sure the “App Launchpad” organization has the right permissions.

Onboarding In-house Applications

Cloud providers can also add their own in-house applications to the content library of the “App Launchpad” provider organization and upload applications manually. To do so:

  • The provider admin needs to navigate to the “Content Libraries -> vApp Templates” page and click “NEW”.
  • By default, in-house applications have neither a logo nor a summary.

To give these apps a better user experience, the service provider can set metadata on the vApp templates via the GUI or the vCloud API. Here is the GUI way to do so:

  • Go to the content library, click on the application you recently uploaded, go to Metadata, click “Edit”, and add the following items:
    • title – Title of the application
    • summary – Summary of the application, displayed on the application tile.
    • description – Description of the application
    • version – Version number of the application.
    • logo – Provider can choose a logo using an internal or external web location like S3.
    • screenshot – Provider can choose a default screenshot using an internal HTTP/HTTPS server or an external web location like S3
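
For the API route, here is a rough curl sketch rather than an exact recipe: the host name, vApp template ID, API version, and session token are placeholders, and the metadata payload should be validated against the vCloud API schema reference on code.vmware.com before use:

#curl -k -X POST "https://vcd.example.com/api/vAppTemplate/vappTemplate-<id>/metadata" \
  -H "Accept: application/*+xml;version=33.0" \
  -H "x-vcloud-authorization: <session-token>" \
  -H "Content-Type: application/vnd.vmware.vcloud.metadata+xml" \
  -d '<Metadata xmlns="http://www.vmware.com/vcloud/v1.5"><MetadataEntry><Key>summary</Key><TypedValue xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="MetadataStringValue"><Value>Summary shown on the application tile</Value></TypedValue></MetadataEntry></Metadata>'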

This completes the installation and configuration of Cloud Director App Launchpad. As I said in my last post, App Launchpad is a free component for VMware Cloud Director and doesn't necessitate the use of Bitnami catalogs; providers can use their own appliances. So go ahead and give it a try, and start delivering PaaS-like solutions to your tenants.

VMware Cloud Director Encryption -PartIII

In Part 1 & Part 2 we configured the HyTrust KeyControl cluster & vCenter. In this post we will configure Cloud Director to utilize what we have configured so far.

Attach Storage Policy to Provider VDC

To update the information in the vCloud Director database about the VM storage policies which we had created in underlying vSphere environment, we must refresh the storage policies of the vCenter Server instance.

  • Log in to Cloud Director with a cloud admin account, go to vSphere Resources, choose the vCenter on which we created the policies, and click “REFRESH POLICIES”
  • You can add a VM storage policy to a provider virtual data center, after which you can configure organization virtual data centers backed by this provider virtual data center to support the added storage policy.
    • Log in to Cloud Director, go to Provider VDCs, and choose the PVDC backed by the cluster where we created the storage policies.
    • Click on “ADD”
  • Choose the policy that we created in the previous post.

Attach Storage Policy to Organization VDC

You can configure an organization virtual data center to support a VM storage policy that you previously added to the backing provider virtual data center.

  • Now click on Organization VDCs, and click the name of the target organization virtual data center.
  • Click the Storage tab, and click Add.
  • You can see a list of the available additional storage policies in the source provider virtual data center
  • Select the check boxes of one or more storage policies that you want to add, and click Add.

Self-Service Tenant Consumption

When a provider's tenant creates a VM/vApp (a virtual machine can exist as a standalone machine or within a vApp), they can use the encryption policy that we created previously.

  • This is the new-VM-from-template creation wizard; the tenant user must choose “use custom storage policy” and select the “encryption policy”
  • Once the VM is provisioned, the user can check the storage policy by clicking on the VM.
  • The user can also go into the “Hard Disk” section of the VM and check the disk policy.

Encrypt Named Disks

Named disks are standalone virtual disks that you create in organization VDCs. When you create a named disk, it is associated with an organization VDC but not with a virtual machine. After you create the disk in a VDC, the disk owner or an administrator can attach it to any virtual machine deployed in the VDC. The disk owner can also modify the disk properties, detach it from a virtual machine, and remove it from the VDC. System administrators and organization administrators have the same rights to use and modify the disk as the disk owner.

  • Here we will create a new encrypted named disk by choosing the “Encryption Policy” storage policy.
  • Cloud Director allows users to attach these named disks to virtual machines
  • Click the radio button next to the name of the named disk that you want to attach to a virtual machine, and click Attach
  • From the drop-down menu, select a virtual machine to which to attach the named disk, and click Apply.

This completes the three-part Cloud Director encryption configuration and its use by tenants. This feature enables VMware Cloud Providers with new offerings and monetization opportunities, so go ahead, deploy it, and start offering additional, differentiated services.

VMware Cloud Director Encryption -PartII

In Part 1 we configured the HyTrust KeyControl cluster. In this post we will register this cluster in vCenter and configure encryption for virtual machines. Let's create a certificate on the KMS server, which we will use to authenticate with vCenter.

Create Certificate

To create a certificate, log in to the KMS server and go to KMIP:

  • Click on “Client Certificates”
  • Then click on Actions and “Create Certificates”
  • Enter the required details for creating the certificate and click Create.

Configure KMS with vCenter

  • Highlight the newly created certificate, click the Actions dropdown button, then click the Download Certificate option. A zip file containing the Certificate of Authority (CA) and the certificate will be downloaded.
  • Once you have downloaded the certificate, log in to the VCSA, highlight the vCenter in the left-hand pane, click the Configure tab in the right-hand pane, click Key Management Servers, then click the Add KMS button.
  • Enter a Cluster name, Server Alias, Fully Qualified Domain Name (FQDN)/IP of the server, and the port number. Leave the other fields as the default, then click OK.

Enable Trust between vCenter and KMS

  • Now we have to establish the trust relationship between vCenter and HyTrust KeyControl. Highlight the KeyControl appliance and click Establish trust with KMS.
  • Select the Upload certificate and private key option, then click OK.
  • Click the Upload file button, browse to where the CA file was previously generated, select the “vcenter name”.pem file, then click Open.
  • Repeat the process for the private key by clicking the second Upload file button. Verify that both fields are populated with the same file, then click OK.
  • You will now see the connection status shown as Normal, indicating that trust has been established. HyTrust KeyControl is now set up as the Key Management Server (KMS) for vCenter.
  • Now that we have successfully added one node of the cluster, add the other node by following the same steps.

Create Tag Category, Tag & Attach to Datastore

Now we need to tag the datastores which will hold the encrypted VMs: create a “Tag Category” and a “Tag” in vCenter and tag the datastores with this tag.


Create Storage Profile

  • Log in to vCenter > Home > Policies and Profiles > VM Storage Policies > Create VM Storage Policy > give it a name > Next
  • Select “Enable host based rules” and select “Enable tag based placement rules”
  • Under “Storage Policy Component”, choose “Default Encryption Properties”. The default properties are appropriate in most cases; you need a custom policy only if you want to combine encryption with other features such as caching or replication.
  • Select the “Tag Category” and choose the appropriate tag.
  • View the datastores, review the configuration, and finish.


This completes the vCenter configuration. In the next post we will configure Cloud Director to consume these policies, and tenants will use them.


VMware Cloud Director Encryption- Part1

The latest Cloud Director 10.1 release adds support for VM encryption through the Cloud Director self-service portal. This means it allows users to encrypt/decrypt VMs and disks via Cloud Director and to view the encryption status of VMs and disks in the API as well as the user interface. Some of the key features are:

  • Ability to encrypt VMs at rest through Cloud Director UI and API
  • Cloud Providers configure Key Management Service (KMS), and encryption policy in backend vSphere
  • Cloud providers can choose to make VM encryption available for some or all tenants
  • Tenant users can choose to apply encryption policy to VMs or individual disks.
  • In the case of a tenant-managed dedicated vCenter, the tenant can manage keys and VM encryption

I am going to write a three-part blog series, which will cover:

  • VMware Cloud Director Encryption – PartI
  • VMware Cloud Director Encryption – PartII
  • VMware Cloud Director Encryption – PartIII

Deploy KMS

HyTrust KeyControl supports a fully functional KMIP server that can be deployed as a vSphere Key Management Server. Once deployment is complete and a trusted connection between KeyControl and vSphere has been established, KeyControl can manage the encryption keys for virtual machines in the cluster that have been encrypted with vCenter Server for vSphere Virtual Machine Encryption or VMware vSAN Encryption.

In this post we will deploy the HyTrust KeyControl KMS server and set up a KMS cluster. There are two installation methods for KeyControl: an OVA appliance or an ISO. In this post we will use the OVA method.

  • Open your vSphere Web Client and Click on “Deploy OVF Template”.
  • Choose OVF
  • Provide Name for the HyTrust KeyControl Appliance, select a deployment location, then click Next.
  • Select the vSphere cluster or host where you would like to install the HyTrust KeyControl appliance, then click Next.
  • Review the details, then click Next.
  • Select the proper configuration from the drop-down menu, then click Next (I am using Demo since resources are limited in my lab).
  • Select the preferred storage and disk format for the KeyControl appliance, then click Next.
  • Select the appropriate network, enter the network details, then click Next.
  • Review the summary screen; if everything is correct, click Finish.

Appliance deployment is now complete. Since I am going to set up a cluster, I will go ahead and deploy another appliance using the same procedure.

Configure KMS Cluster

Once both appliances have been deployed:

  • Power on the newly created HyTrust KeyControl appliance, then open a console to it. Set the system password, then press OK.
  • Since this is the first node, select No, then press Enter.
  • Review the Appliance Configuration, then press OK.
  • The first KeyControl appliance is now configured and you can move to the KeyControl WebGUI. Open a web browser and navigate to the IP or FQDN of the KeyControl appliance. Use the following credentials to log in initially:
    • Username: secroot
      Password: secroot
  • After logging in, read and accept the EULA by clicking I Agree at the bottom of the agreement.
  • Enter a new password for the secroot account, then click Update Password.
  • We have now successfully set up our first node.
  • Power on the second appliance and follow the same steps as above, except click “YES” here.
  • This takes us into the cluster creation process.
  • Enter the IP address of the first node.
  • The final piece of information required is the passphrase, which requires a minimum of 16 characters.
  • The node must now be authenticated through the webGUI, as the following message indicates:
  • At this point you need to log on to the WebGUI console of the first node with administration privileges. The new KeyControl node will automatically appear as an unauthenticated node in the KeyControl cluster, as shown below:
  • To authenticate this new node, click the Actions Button and then click Authenticate. This will take you to the authentication screen shown below. You are prompted to enter the Authentication Passphrase.
  • On the new KeyControl’s console, you will see a succession of status messages, as shown below:
  • Once authentication completes, the KeyControl node is listed as Authenticated but Unreachable until cluster synchronization completes and the cluster is ready for use. This should not take more than a minute or two, after which it shows as Authenticated and Online. Once the KeyControl node is available, the status automatically moves to Online and the cluster status at the top right of the screen changes back to Healthy.

At this point, the new cluster/node is ready to use.

Enable KMS Service

  • Now click the KMIP button on the toolbar to configure KMIP.
  • Enable KMIP by changing the state from Disabled to Enabled, click Save, then click Apply.
    • NOTE: Take note of the port number 5696 and have it handy. You will specify this port number in the vCenter\VCSA configuration, later on.
  • We have now successfully set up the KMS cluster.

This completes the KMS server installation, configuration, and KMS cluster creation. In the next post, we will configure vSphere to use this cluster as its KMS.

Deliver Applications as a Service on Cloud Director with App Launchpad

Cloud Director App Launchpad helps VMware Cloud Providers offer their tenants a curated portfolio of applications for consumption, without the tenants having to know anything about the underlying VMware Cloud Director infrastructure. With the release of App Launchpad, cloud providers can elevate their portfolio from IaaS and CaaS to Applications as a Service, and can offer in-house applications suited to verticals or solution areas. App Launchpad helps providers deliver all of this while making it very easy for all customer personas, such as DevOps engineers, developers, and IT admins, to access and deploy applications on VMware Cloud Director.

App Launchpad For Providers

VMware Cloud Providers can configure App Launchpad to work with the following types of applications:

  • Bitnami Applications

    • Bitnami offers pre-configured, tested, and supported open source applications. Service Providers that subscribe to the Bitnami Community Catalog can access these applications from the VMware Cloud Marketplace also.
  • Apps from VMware Cloud Marketplace

    • VMware Cloud Marketplace is a service that allows partners to easily publish solutions in a variety of formats, whether containers, appliances, or even SaaS. It offers a range of ISV applications that a service provider can add to VMware Cloud Director and make available for consumption to their tenants using App Launchpad.
  • In-house applications

    • Service providers can upload their own in-house developed application vApps and make them available for tenants to consume within seconds. Providers upload their solutions into the App Launchpad catalog, update each catalog item's description and logo using an API call, and make it available to their choice of tenants using App Launchpad.

Provider Onboarding Cloud Market Place Applications

To onboard Bitnami applications to App Launchpad, cloud providers go to the VMware Cloud Marketplace and import the applications into the newly created Launchpad provider organization.


Provider Onboarding In-House Applications

To onboard custom, in-house applications, providers go to the content library of the newly created Launchpad provider organization, create catalog items, and upload their applications. Providers also need to add a logo and description to these applications using an API; please refer to code.vmware.com for the API calls.

Provider Tenant Access & Application Management

To make applications available to tenants, App Launchpad automatically creates a catalog of applications and publishes it to a VMware Cloud Director organization. Providers can also configure the default application deployment settings at the VMware Cloud Director organization and organization virtual data center levels.

Using App Launchpad, providers can control the visibility of application catalogs to tenant users and define various T-shirt sizes for the applications.

If a provider removes an application catalog from a VMware Cloud Director organization, the users in that organization can no longer use the applications in the catalog.


App Launchpad For Tenants

Using App Launchpad, various tenant personas like developers, IT admins, end users and DevOps engineers can launch applications into their organization virtual data center in a few seconds and start consuming them immediately. A few of the key features are:

  • Curated catalog of applications for tenants
  • 1-Click app Deployment for tenant users
  • Automates VM creation, networking, firewalling, and IP assignment. Tenant users do not need to worry about the underlying infrastructure required to provision and access apps; they just go to App Launchpad – My Applications, where they can get access information and basic operations like:
    • Open Console
    • IP address
    • Actions like – power on/off , delete etc..
  • Tenant consumers need no knowledge of the underlying infrastructure to provision and access apps.

How to Start?

It is a very simple and easy process to install App Launchpad; here are the App Launchpad installation prerequisites:


NOTE – Only the Linux-based operating systems CentOS 7 and CentOS 8 are supported as of now.

For detailed installation steps, please refer to the App Launchpad documentation here; I will also write a few more posts on this topic.

App Launchpad is a free component for VMware Cloud Director and doesn't necessitate the use of Bitnami catalogs; providers can use their own appliances. So go ahead and give it a try, and start delivering PaaS solutions to your customers.