VMware Cloud Director OIDC Integration with VMware Workspace ONE Access


Prerequisite

  • VMware Workspace ONE Access must already be deployed.
  • VMware Workspace ONE Access must be configured with a directory service source for users and groups.
  • Cloud Director must be installed and configured for provider and tenant organizations.

Bill of Materials

  • VMware Cloud Director 10.5.1
  • VMware Workspace ONE Access 23.09.00

Steps to Configure Workspace ONE Access for OIDC Authentication

Workspace ONE Access uses OAuth 2.0 to let applications register with Workspace ONE Access and obtain secure, delegated access to applications. In this case, we will register Cloud Director as an OAuth 2.0 client in Workspace ONE Access.

  • In the Workspace ONE Access console Settings > OAuth 2.0 Management page, click ADD CLIENT.
  • In the Add Client page, configure the following.
  • Click SAVE. The client page is refreshed and the Client ID and the hidden Shared Secret are displayed.
  • Copy and save the client ID and generated shared secret.
  • Note: If the shared secret is not saved or you lose the secret code, you must regenerate the secret and update the Cloud Director configuration that uses it with the regenerated value. To regenerate a secret, on the OAuth 2.0 Management page click the client ID that requires a new secret, then click REGENERATE SECRET.

Steps to configure VMware Cloud Director to use Workspace ONE Access for Provider/Tenant users and groups

  • From the top navigation bar, select Administration.
  • In the left panel, under Identity Providers, click OIDC, or browse directly to: https://[VCD Endpoint]/(provider or tenant/[orgname])/administration/identity-providers/oidcSettings
  • If you are configuring OIDC for the first time, copy the client configuration redirect URI and use it to create a client application registration with an identity provider that complies with the OpenID Connect standard, for example, VMware Workspace ONE Access. (this has already been done above)
  • Click Configure.
  • Verify that OpenID Connect is active and fill in the Client ID and Client Secret you created in VMware Workspace ONE Access during client creation above.
  • To use the information from a well-known endpoint to automatically fill in the configuration information, turn on the Configuration Discovery toggle and enter a URL at the site of the provider that VMware Cloud Director can use to send authentication requests to. Fill in the IDP Well-known Configuration Endpoint field with: https://[Workspace ONE Access URL]/SAAS/auth/.well-known/openid-configuration
  • Click Next.
  • If everything is configured correctly, the information below is populated automatically. Note that we are using the User Info endpoint.
  • VMware Cloud Director uses scopes to authorize access to user details. When a client requests an access token, the scopes define the permissions that the token grants to user information. Enter the scope information and click Next.
  • Since we are using User Info as the access type, map the claims as below and click Next.
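The Configuration Discovery step works because the well-known endpoint returns a standard OpenID Connect discovery document in JSON, from which VCD reads the endpoints it needs. The sketch below parses a sample discovery document; the host and endpoint values are illustrative placeholders, not taken from a real Workspace ONE Access deployment:

```python
import json

# Illustrative discovery document, of the kind returned by
# https://[Workspace ONE Access URL]/SAAS/auth/.well-known/openid-configuration
# (field names come from the OpenID Connect Discovery spec; values are examples)
discovery_json = """
{
  "issuer": "https://ws1access.example.com/SAAS/auth",
  "authorization_endpoint": "https://ws1access.example.com/SAAS/auth/oauth2/authorize",
  "token_endpoint": "https://ws1access.example.com/SAAS/auth/token",
  "userinfo_endpoint": "https://ws1access.example.com/SAAS/auth/userinfo",
  "jwks_uri": "https://ws1access.example.com/SAAS/auth/token_keys",
  "scopes_supported": ["openid", "profile", "email"]
}
"""

config = json.loads(discovery_json)

# These are the fields VCD auto-populates after you click Next.
for key in ("authorization_endpoint", "token_endpoint", "userinfo_endpoint", "jwks_uri"):
    print(key, "->", config[key])
```

This is why the toggle saves manual work: a single URL yields every endpoint the OIDC login flow needs.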

NOTE: At the claims mapping step, the Subject field is populated with “sub” by default, which means VCD users will have usernames in the format “[username]@XXX”. If you want to import users into VCD with a different format, you can map the Subject to “email” instead and then import users into VCD using the email address attached to the account.

This is the most critical piece of configuration. Mapping this information is essential for VCD to interpret the token/user information correctly during the login process.
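To make the effect of the Subject mapping concrete, here is a minimal sketch of how the mapped claim determines the username VCD imports. The userinfo payload and its values are hypothetical examples, not output from a real token:

```python
# Illustrative claims returned by the User Info endpoint (hypothetical values)
userinfo = {
    "sub": "jdoe@EXAMPLE-IDP",
    "email": "john.doe@example.com",
    "given_name": "John",
    "family_name": "Doe",
}

def vcd_username(claims: dict, subject_claim: str) -> str:
    """Return the username VCD would import, given the configured Subject mapping."""
    return claims[subject_claim]

# Default mapping: Subject -> "sub"
print(vcd_username(userinfo, "sub"))
# Alternative mapping: Subject -> "email"
print(vcd_username(userinfo, "email"))
```

Whichever claim the Subject is mapped to, the imported VCD username must match it exactly, which is why mismatches here surface later as SSO failures.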

Login as an OIDC Group Member User

  1. In the Provider/Tenant organization’s Administration page, import OIDC groups and map them to existing VCD roles. (NOTE: if you don’t see the IMPORT GROUPS button, refresh the page and it will appear.)
  2. The user goes to https://[VCD Endpoint]/(provider or tenant/[orgname]).
  3. The user is redirected to the Workspace ONE Access login page and can log in with a user that belongs to the group.
  4. The user is redirected back to VCD and should now be fully logged in.

After the first successful login, the organization administrator can see the newly auto-imported user.

Login as an OIDC User

  • In the Provider/Tenant organization’s Administration page, import OIDC users and map them to existing VCD roles.
  • The user goes to https://[VCD Endpoint]/(provider or tenant/[orgname]).
  • The user is redirected to the Workspace ONE Access login page and logs in there.
  • The user is redirected back to VCD and should now be fully logged in.

If you get the SSO Failure page, double-check that you imported the correct group/user and that the username format is correct. For additional information, and for troubleshooting and configuring additional logging, you can check the official documentation here.

Login without OIDC or as a Local User

In version 10.5, if an organization in VMware Cloud Director has SAML or OIDC configured, the UI displays only the Sign in with Single Sign-On option. To log in as a local user, navigate to https://vcloud.example.com/tenant/tenant_name/login or https://vcloud.example.com/provider/login.

NSX Multi-Tenancy in VMware Cloud Director


Multi-tenancy was introduced in the NSX UI with VMware NSX 4.1, and starting with version 10.5.1, VMware Cloud Director supports NSX multi-tenancy, enabling direct alignment of VCD organizations with NSX projects.

What are NSX Projects?

A project in NSX functions akin to a tenant. Creating projects enables the separation of security and networking configurations among different tenants within a single NSX setup.

Multi-tenancy in NSX is achieved by creating NSX projects, where each project represents a logical container of network and security resources (a tenant). Each project can have its set of users, assigned privileges, and quotas. Multi-tenancy serves various purposes, such as providing Networking as a Service, Firewall as a Service, and more.

How do NSX Projects relate to Cloud Director Organizations?

Within the VCD platform, the tenancy is established via Organizations. Each tenant receives its exclusive organization, ensuring a distinct and isolated virtual infrastructure tailored to their tasks. This organizational setup grants precise control over tenant access to resources, empowering them to oversee Users, Virtual Data Centers (VDCs), Catalogs, Policies, and other essentials within their domain.

To clearly outline the tenant structure, VMware NSX introduced a feature known as Projects. These Projects allocate NSX users to distinct environments housing their specific objects, configurations, and monitoring mechanisms based on alarms and logs.

With VCD 10.5.1, management functionalities tied to NSX Tenancy fall within the exclusive purview of the Provider. NSX Tenancy operates on an Organization-specific level within VCD. When activated, a VCD Organization aligns directly with an NSX Project.

VCD drives and manages the creation of the associated NSX project, allowing the User to configure the project identifier. The NSX project is actually created during the creation of the first VDC in the organization for which you activated NSX tenancy. The name of the NSX project is the same as the name of the organization to which it is mapped.

How to enable?

The cloud provider can enable NSX tenancy for a specific organization by going into the Cloud Director Organizations section, choosing an organization, and selecting “NSX Tenancy”. The provider can also define a Log Name, which becomes the organization’s unique identifier in the backing NSX Manager logs.


Once NSX tenancy has been activated at the organization level, the cloud provider can create a new Org VDC and choose to enable “NSX Tenancy”. This is when the NSX project actually gets created in NSX.

NOTE: Network Pool selection is disabled because NSX supports project creation only in the default overlay transport zone. Also, make sure the default overlay transport zone already exists.

Note: If you choose not to activate NSX tenancy during the creation of an organization VDC, you cannot change this setting later.

When should you not enable tenancy?

Some use cases do not require organization VDC participation in NSX tenancy, for example, if the VDC only needs VLAN networks. Additionally, organization VDCs using NSX tenancy are restricted to the network pool backed by the default overlay transport zone, so if you need to use a different network pool, you may wish to opt out of NSX tenancy.

Also, there are a few features that NSX projects do not support today, such as NSX Federation deployments. In addition, not all Edge Gateway features are available for networking-tenancy-enabled VDCs, for example VPNs (IPsec/L2) and sharing segment profile templates. This is a work in progress, and more features will arrive in future releases.

Conclusion

Aligning NSX Projects with VCD’s Tenancy ensures customers access an extensive array of networking capabilities offered by the NSX Multi-tenancy solution. Among these crucial functionalities is tenant-centric logging for core VCD networking services like Edge Services and Distributed firewalls. Additionally, integrating NSX Projects paves the way to investigate potential enhancements, facilitating tenant self-service login capabilities within VCD features. Below, you can find more information and capabilities.

Managing NSX Tenancy in VMware Cloud Director

VMware Cloud Director 10.5.1 adopts NSX Projects

Deploy, Run, and Manage Any Application with VMware Cloud Director Content Hub


In today’s fast-paced digital landscape, businesses require agile and scalable solutions to deploy and manage their applications efficiently. VMware Cloud Director Content Hub (VDCH) introduced in VMware Cloud Director 10.5, offers a robust platform for a simplified and automated way to deploy, run, and manage a wide range of applications. In this blog, we’ll explore the key features and benefits of VMware Cloud Director Content Hub and how it streamlines the application deployment process.

What is Content Hub ?

Content Hub, introduced in VMware Cloud Director 10.5, serves as a convenient tool that unifies the interface for accessing both VM and container-based content. This feature seamlessly integrates external content sources, such as VMware Marketplace and Helmchart Repositories, into the VMware Cloud Director environment.

By incorporating external sources, content can be seamlessly brought into the catalogs of VMware Cloud Director, ensuring its immediate availability for users. With the integration of Content Hub, VMware Cloud Director introduces an interface oriented towards applications, enabling users to effortlessly visualize and retrieve catalog content, thus elevating the overall content management experience.

As a result of these enhancements, catalog items are presented as “Application Images,” highlighting crucial application information such as the application’s name, version, logo, screenshots, and other details necessary for consuming the application.

Integration with VMware Marketplace

To integrate Marketplace with VMware Cloud Director, a cloud provider needs to create a Marketplace connection and then share it with one or more tenant organizations. Here is the procedure:

  • Cloud Provider must be logged in as a system administrator.
  • Cloud Provider must have access to VMware Marketplace and an API token generated there. (See Generate API Tokens in the VMware Cloud Services Product Documentation.)
  • From the left panel, select VMware Marketplace and click New.
  • The New VMware Marketplace dialog opens; enter the following values:

Integration with Helm Chart Repository

A Helm chart repository is a named resource that stores all the information required to establish a connection with a Helm chart repository, allows users to browse contents from a remote repository, and imports applications from the repository.

Both cloud providers and tenants can create Helm chart repository content connections. Tenants need specific rights, which providers can assign to individual tenants based on requirements.

  • From the left panel, select Helm Chart Repository and click New.
  • The New Helm Chart Repository dialog opens; enter the following values:

  • Password-protected Helm chart repositories are also supported. 
  • You can create multiple Helm chart repository resources in VMware Cloud Director.
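For reference, a Helm chart repository is identified by the index.yaml file at its root, which lists every chart and version the repository serves; this is the content the connection browses and imports. A minimal illustrative example (chart name and URLs are placeholders):

```yaml
# index.yaml at the root of a Helm chart repository (illustrative)
apiVersion: v1
entries:
  my-app:
    - name: my-app
      version: 1.2.0
      appVersion: "2.4"
      urls:
        - https://charts.example.com/my-app-1.2.0.tgz
    - name: my-app
      version: 1.1.0
      appVersion: "2.3"
      urls:
        - https://charts.example.com/my-app-1.1.0.tgz
```

Because each entry can carry multiple versions, the same repository feeds the catalog versioning described later.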

Share a VMware Marketplace/Helm Chart Repository Resource with tenants

As a service provider administrator, you can share the configured VMware Marketplace resources with other tenant organizations.

  • Inside VMware Marketplace/Helm Chart Repository, on the left of the name of the connection that you want to share, click the vertical ellipsis and select Share.
  • Share the resource.
  • Select the tenant organizations you want to share the resource with.
  • You can set the individual access level for the respective tenant organization only as Read Only.
  • Click Save.

NOTE: Since vCD caches the Helm chart repository items in its database, you can click Sync to ensure that the latest content from the remote repository is cached in the vCD DB.

Roles Required

A new set of user roles has been introduced to give users access to and management of the Content Hub. It is crucial to assign the appropriate roles to individuals within the organization so they can access and use the Content Hub feature effectively.

Rights Bundles: A Rights Bundle is a collection of rights for a tenant organization and defines what that organization has access to. Rights Bundles are always defined globally, and they can be applied to zero or more tenants.

Roles: A Role is a collection of rights for a user. A Role defines what an individual user has access to. Roles can either be tenant-specific–defined within a single organization and only visible to that organization, or global–defined globally and applied to zero or more tenants. 

Putting all the pieces together looks something like this:

The following new rights have been introduced to manage CatalogContentSource entities:

Apply the above rights to organizations as well as users so that they have sufficient rights to add and deploy applications.

Content Hub Operator for Kubernetes Container Service Extension

To perform this operation, you first need full control of the Kubernetes cluster where you are deploying the container applications, plus the following VMware Cloud Director rights:

You must install a Kubernetes operator to allow tenant users to deploy container applications from external content sources. From the top navigation bar, select Content Hub, from the left panel, select Kubernetes Operator, and on the Kubernetes Operator page, select the Kubernetes cluster on which you want to install the Kubernetes operator, and click Install Operator.

The Install operator on cluster-name dialog opens. On this page, configure the source location of the Kubernetes operator package.

Select the type of source location for the Kubernetes operator package. VMware Registry location with anonymous access is selected by default.

(Optional) To configure a custom registry, first, you must clone the Content Hub Kubernetes operator package from the VMware container registry to your custom registry. The Content Hub Kubernetes operator package is in Carvel format and you must use the Carvel imgpkg tool for cloning the package.

Click Install Operator.

After a few minutes, the status shows “Not Reachable”.

It then finally turns to “Ready”. After the operator installation completes successfully, the system creates two namespaces in the Kubernetes cluster. The Content Hub Operator manager is installed in the first namespace, “vcd-contenthub-system”. The second namespace, “vcd-contenthub-workloads”, is initially empty; it is used to deploy container applications at a later stage.

Catalog Versioning

In previous versions of VMware Cloud Director, catalogs had no concept of versioning. For instance, when a user imported multiple versions of the same application from external sources, each VM or container version was stored and listed as an individual resource. From VMware Cloud Director 10.5 onwards, when a virtual machine or container application is imported with multiple versions, whether at the same time or at different intervals, Content Hub manages and structures the versioning of that resource.
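The difference can be sketched as grouping imports by application name instead of listing every import as its own catalog entry. The data and grouping logic below are a hypothetical illustration of the behaviour, not VCD's actual implementation:

```python
from collections import defaultdict

# Hypothetical imports of applications at different times
imported_items = [
    {"name": "wordpress", "version": "6.3.1"},
    {"name": "wordpress", "version": "6.4.0"},
    {"name": "cassandra", "version": "4.1.3"},
]

# Pre-10.5 behaviour: every import becomes its own catalog entry
flat_listing = [f'{item["name"]}-{item["version"]}' for item in imported_items]

# Content Hub behaviour: one application image per name, with managed versions
catalog = defaultdict(list)
for item in imported_items:
    catalog[item["name"]].append(item["version"])

print(len(flat_listing))  # 3 separate entries
print(dict(catalog))      # {'wordpress': ['6.3.1', '6.4.0'], 'cassandra': ['4.1.3']}
```

This grouping is what makes the version picker possible when launching an application image, as shown in the next section.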

Let’s Deploy a Container Application

Upon launching an application image that has multiple versions, you will be prompted to choose the specific version of the image you wish to deploy.

If a container application is launched, the Container Application launcher window opens; provide the application name, select the version to deploy, and select the TKG cluster.

(Optional) To customize the application parameters, click Show Advanced Settings.

Click Launch Application.

The system deploys the container application. After the deploy operation completes, on the card for the application, the status appears as Deployed. VMware Cloud Director deploys the container application as a Helm release under the vcd-contenthub-workloads namespace to the Kubernetes cluster.

Clicking either the application name or DETAILS takes the user to the application details, such as status, pod status, access URLs, etc.

The user can use the access URL details to access the application easily.

Deploy a Stateful Container Application

In this section, we will deploy a Cassandra database using Cloud Director Content Hub. The process is the same as in the section above. Once the application is deployed, check the details; they differ this time, for example there is no access URL, and the required Persistent Volumes were created automatically.

It also gives visibility into the StatefulSet and its status, secrets (if any), services and their types, as well as other resources. This allows end users to see this information in the vCD GUI instead of having to check the Kubernetes cluster.

Deploy a Virtual Machine Application

A vApp consists of one or more virtual machines that communicate over a network and use resources and services in a deployed environment. If you have access to a catalog, you can use the vApp templates in the catalog to create vApps.

A vApp template can be based on an OVF file with properties for customizing the virtual machines of the vApp. The vApp inherits these properties. If any of those properties are user-configurable, you can specify their values.

Enter a name and, optionally, a description for the vApp.

Enter a runtime lease and a storage lease for the vApp, and click Next.

From the Storage Policy drop-down menu, select a storage policy for each of the virtual machines in the vApp, and click Next.

If the placement policies and the sizing policies for the virtual machines in the vApp are configurable, select a policy for each virtual machine from the drop-down menu.

If the hardware properties of the virtual machines in the vApp are configurable, customize the size of the virtual machine hard disks and click Next.

If the networking properties of the virtual machines in the vApp are configurable, customize them and click Next.

On the Configure Networking page, select the networks to which you want each virtual machine to connect.

(Optional) Select the check box to switch to the advanced networking workflow and configure additional network settings for the virtual machines in the vApp.

Review the vApp settings and click Finish.

You can see under Applications -> Virtual Applications, VM is getting deployed.

With the Content Hub feature, providers can offer centralized content management for application images, vApp and VM templates, and media. VMware Cloud Director now has full control over the deployed and listed images, storing all pertinent information in its own database. This is unlike application images imported from App Launchpad, where VMware Cloud Director only listed the imported images while App Launchpad stored all the details in its own database. Content Hub is also easier to use, since you no longer have to set up and configure App Launchpad as an additional task.

Infrastructure as Code with VMware Cloud Director


In today’s fast-paced digital landscape, organizations are constantly seeking ways to optimize their IT operations and streamline infrastructure management. One approach that has gained significant traction is Infrastructure as Code (IaC). By treating infrastructure provisioning and management as code, IaC enables organizations to automate and standardize their processes, resulting in increased efficiency, scalability, and consistency.

In this blog article, we will explore the concept of Infrastructure as Code and its practical implementation using VMware Cloud Director. We will delve into the benefits and challenges of adopting IaC, highlight the features of VMware Cloud Director, and showcase how the integration of Terraform with VMware Cloud Director can revolutionize IT operations for Cloud Providers as well as customers.

What is Infrastructure as Code?

Infrastructure as Code (IaC) is a methodology that involves defining and managing infrastructure resources programmatically using code. It allows organizations to provision, configure, and manage their infrastructure resources, such as compute, network, and storage, through automated processes. By treating infrastructure as code, organizations can leverage the same software development practices, such as version control and continuous integration, to manage their infrastructure.

Benefits of Infrastructure as Code

The adoption of Infrastructure as Code brings numerous benefits to cloud providers as well as customers:

Consistency and repeatability: With IaC, infrastructure deployments become consistent and repeatable, ensuring that the same configuration is applied across different environments. This eliminates configuration drift and minimizes the risk of errors caused by manual configurations.

Efficiency and scalability: IaC enables organizations to automate their infrastructure provisioning and management processes, reducing the time and effort required for manual tasks. This allows IT teams to focus on more strategic initiatives and scale their infrastructure easily as the business demands.

Version control and collaboration: By using code repositories and version control systems, organizations can track changes made to their infrastructure configurations, roll back to previous versions if needed, and collaborate effectively across teams.

Documentation and auditability: IaC provides a clear and documented view of the infrastructure configuration, making it easier to understand the current state and history of the infrastructure. This improves auditability and compliance with regulatory requirements.

Flexibility and portability: With IaC, organizations can define their infrastructure configurations in a platform-agnostic manner, making it easier to switch between different cloud providers or on-premises environments. This provides flexibility and avoids vendor lock-in.

Challenges of Infrastructure as Code

While Infrastructure as Code offers numerous advantages, it also presents some challenges that organizations should be aware of:

Learning curve: Adopting IaC requires IT teams to acquire new skills and learn programming languages or specific tools. This initial learning curve may slow down the adoption process and require investment in training and upskilling.

Code complexity: Infrastructure code can become complex, especially for large-scale deployments. Managing and troubleshooting complex codebases can be challenging and may require additional expertise or code review processes.

Continuous maintenance: Infrastructure code needs to be continuously maintained and updated to keep up with evolving business requirements, security patches, and technology advancements. This requires ongoing investment in code management and testing processes.

Infrastructure as Code Tools

There are various tools available in the market to implement Infrastructure as Code. Some of the popular ones include:

Terraform: Terraform is an open-source tool that allows you to define and provision infrastructure resources using declarative configuration files. It supports a wide range of cloud providers, including VMware Cloud Director, and provides a consistent workflow for infrastructure management.

Chef: Chef is a powerful automation platform that enables you to define and manage infrastructure as code. It focuses on configuration management and provides a scalable solution for infrastructure provisioning and application deployment.

Puppet: Puppet is a configuration management tool that helps you define and automate infrastructure configurations. It provides a declarative language to describe the desired state of your infrastructure and ensures that the actual state matches the desired state.

Ansible: Ansible is an open-source automation tool that allows you to define and orchestrate infrastructure configurations using simple, human-readable YAML files. It emphasizes simplicity and ease of use, making it a popular choice for infrastructure automation.

VMware Cloud Service Provider Program

The VMware Cloud Service Provider Program (CSP) allows service providers to build and operate their own cloud environments using VMware’s virtualization and cloud infrastructure technologies, such as VMware Cloud Director and Cloud Director Availability. This program enables service providers to offer a wide range of cloud services, including Infrastructure as a Service (IaaS), disaster recovery, Application as a Service, Kubernetes as a Service, DBaaS, and many other managed services.

Integrating Terraform with VMware Cloud Director

To leverage the power of Infrastructure as Code in VMware Cloud Director, you can integrate it with Terraform. Terraform provides a declarative language for defining infrastructure configurations and a consistent workflow for provisioning and managing resources across different cloud providers, including VMware Cloud Director. By combining Terraform with VMware Cloud Director, Cloud Providers/Customers can:

Automate infrastructure provisioning: With Terraform, cloud providers can define tenant virtual data centers, virtual machines, networks, and other resources as code. This allows for automated and consistent provisioning of infrastructure resources, eliminating manual configuration. I wrote a blog article on how to get started with Cloud Director and Terraform, which can be found here:

Ensure consistency and repeatability: Infrastructure configurations defined in Terraform files can be version controlled, allowing for easy tracking of changes and ensuring consistency across different environments.

Collaborate effectively: Terraform code can be shared and collaboratively developed within teams, enabling better collaboration and knowledge sharing among IT professionals.

Enable infrastructure as self-service: By integrating Terraform with VMware Cloud Director, organizations can provide a self-service portal for users to provision and manage their own infrastructure resources, reducing dependency on IT support.

Implementing Infrastructure as Code with VMware Cloud Director and Terraform

To begin with the Terraform configuration, create a main.tf file and specify the version of the VMware Cloud Director Terraform provider:

terraform {
  required_providers {
    vcd = {
      source  = "vmware/vcd"
      version = "3.9.0"
    }
  }
}

provider "vcd" {
  url                  = "https://vcd-01a.corp.local/"
  user                 = "admin"
  password             = "******"
  org                  = "tfcloud"
  vdc                  = "tfcloud"
}

In the above configuration, replace the url, user, password, org, and vdc values with your own VMware Cloud Director details. Now, let’s define the infrastructure resources using Terraform code. Here’s an example of creating an organization in VMware Cloud Director:

# Create a new org named "tfcloud"
resource "vcd_org" "tfcloud" {
  name              = "terraform_cloud"
  full_name         = "Org created by Terraform"
  is_enabled        = "true"
  stored_vm_quota   = 50
  deployed_vm_quota = 50
  delete_force      = "true"
  delete_recursive  = "true"
  vapp_lease {
    maximum_runtime_lease_in_sec          = 0
    power_off_on_runtime_lease_expiration = false
    maximum_storage_lease_in_sec          = 0
    delete_on_storage_lease_expiration    = false
  }
  vapp_template_lease {
    maximum_storage_lease_in_sec       = 0
    delete_on_storage_lease_expiration = false
  }
}

Creating an Organization Administrator

# Create a new Organization Administrator
resource "vcd_org_user" "tfcloud-admin" {
  org               = vcd_org.tfcloud.name
  name              = "tfcloud-admin"
  password          = "*********"
  role              = "Organization Administrator"
  enabled           = true
  take_ownership    = true
  provider_type     = "INTEGRATED" # INTEGRATED, SAML, OAUTH
}

Creating Org VDC

# Create Org VDC for above org
resource "vcd_org_vdc" "vdc-tfcloud" {
  name = "vdc-tfcloud"
  org  = vcd_org.tfcloud.name
  allocation_model  = "AllocationVApp"
  provider_vdc_name = "vCD-A-pVDC-01"
  network_pool_name = "vCD-VXLAN-Network-Pool"
  network_quota     = 50
  compute_capacity {
    cpu {
      limit = 0
    }
    memory {
      limit = 0
    }
  }
  storage_profile {
    name    = "*"
    enabled = true
    limit   = 0
    default = true
  }
  enabled                  = true
  enable_thin_provisioning = true
  enable_fast_provisioning = true
  delete_force             = true
  delete_recursive         = true
}

Creating Edge Gateway (NAT Gateway)

# Create Org VDC Edge for above org VDC
resource "vcd_edgegateway" "gw-tfcloud" {
  org                     = vcd_org.tfcloud.name
  vdc                     = vcd_org_vdc.vdc-tfcloud.name
  name                    = "gw-tfcloud"
  description             = "tfcloud edge gateway"
  configuration           = "compact"
  advanced                = true
  external_network {
     name = vcd_external_network.extnet-tfcloud.name
     subnet {
        ip_address            = "10.120.30.11"
        gateway               = "10.120.30.1"
        netmask               = "255.255.255.0"
        use_for_default_route = true
    }
  }
}

Here is a blog post that covers tenant onboarding in Cloud Director using Terraform:

Best Practices for Infrastructure as Code with VMware Cloud Director

Implementing Infrastructure as Code with VMware Cloud Director and Terraform requires adherence to best practices to ensure efficient and reliable deployments. Here are some key best practices to consider:

Use version control: Store your Terraform code in a version control system such as Git to track changes, collaborate with teammates, and easily roll back to previous configurations if needed.

Leverage modules: Use Terraform modules to modularize your infrastructure code and promote reusability. Modules allow you to encapsulate and share common configurations, making it easier to manage and scale your infrastructure.

Implement testing and validation: Create automated tests and validations for your infrastructure code to catch any potential errors or misconfigurations before deployment. This helps ensure the reliability and stability of your infrastructure.

Implement a CI/CD pipeline: Integrate your Terraform code with a continuous integration and continuous deployment (CI/CD) pipeline to automate the testing, deployment, and management of your infrastructure. This helps in maintaining consistency and enables faster and more reliable deployments.

Use variables and parameterization: Leverage Terraform variables and parameterization techniques to make your infrastructure code more flexible and reusable. This allows you to easily customize your deployments for different environments or configurations.

Implement security best practices: Follow security best practices when defining your infrastructure code. This includes managing access controls, encrypting sensitive data, and regularly updating your infrastructure to address any security vulnerabilities.
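To make the variables and parameterization practice concrete, here is a minimal sketch of how the edge gateway from earlier could be parameterized. The variable names and defaults are illustrative, not taken from a real deployment; adjust them to your environment.

```hcl
# Hypothetical variables for the edge gateway shown earlier
variable "edge_name" {
  type        = string
  description = "Name of the Org VDC edge gateway"
  default     = "gw-tfcloud"
}

variable "edge_ip" {
  type        = string
  description = "Upstream IP address assigned to the edge gateway"
  default     = "10.120.30.11"
}

# Same resource as before, now driven by variables so the module can be
# reused across environments (dev/test/prod) with different tfvars files.
resource "vcd_edgegateway" "gw" {
  org           = vcd_org.tfcloud.name
  vdc           = vcd_org_vdc.vdc-tfcloud.name
  name          = var.edge_name
  configuration = "compact"
  advanced      = true
  external_network {
    name = vcd_external_network.extnet-tfcloud.name
    subnet {
      ip_address            = var.edge_ip
      gateway               = "10.120.30.1"
      netmask               = "255.255.255.0"
      use_for_default_route = true
    }
  }
}
```

With this shape, a `terraform.tfvars` file per environment is all that changes between deployments.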

Conclusion

Infrastructure as Code offers a transformative approach to managing infrastructure resources, enabling organizations to automate and streamline their IT operations. By integrating Terraform with VMware Cloud Director, organizations can leverage the power of Infrastructure as Code to provision, manage, and scale their infrastructure efficiently and consistently.

In this blog post, we explored the concept of Infrastructure as Code and the benefits and challenges associated with its adoption. We also provided a step-by-step guide on implementing Infrastructure as Code with VMware Cloud Director using Terraform, along with best practices.

By embracing Infrastructure as Code, Cloud providers and their Tenants can unlock the full potential of their infrastructure resources, accelerate their digital transformation journey, and ensure agility, scalability, and reliability in their IT operations.

A Few More Links:

https://developer.vmware.com/samples/7210/vcd-terraform-examples

https://registry.terraform.io/providers/vmware/vcd/latest/docs

https://blogs.vmware.com/cloudprovider/2021/03/terraform-vmware-cloud-director-provider-3-2-0-release.html

Harness the Power of Cloud Director Data Solutions: Offer DBaaS using VMware SQL with MySQL

Featured

In the ever-evolving landscape of cloud technology, Cloud Service Providers (CSPs) play a crucial role in helping businesses unlock the potential of their data. Database as a Service (DBaaS) offerings have become essential tools for organizations looking to streamline their data management processes. This article explores how CSPs can revolutionize DBaaS with Cloud Director Data Solutions, powered by VMware SQL with MySQL, and how this powerful combination can transform the data management experience for CSPs and their customers.

VMware Cloud Director Extension for Data Solutions: An Introduction

The VMware Cloud Director Extension for Data Solutions is a game-changing plug-in for the VMware Cloud Director. By incorporating data and messaging services into the VMware Cloud Director portfolio, it enables cloud providers and their tenants to access and manage services such as VMware SQL with MySQL, VMware SQL with PostgreSQL, and the efficient messaging system, RabbitMQ.

The extension operates in conjunction with Container Service Extension 4.0 or later, facilitating the publication of popular data and messaging services to tenants. These services are deployed in a Tanzu Kubernetes Grid Cluster, which is managed by Container Service Extension 4.0 or later. This powerful combination simplifies the process of deploying and managing these services, while also offering data analytics and monitoring through Grafana and Prometheus.

How the VMware Cloud Director Extension for Data Solutions Functions

The VMware Cloud Director Extension for Data Solutions works hand-in-hand with the Container Service Extension 4.x to provide cloud providers with the ability to publish data and messaging services to their tenants. In turn, tenants can utilize these services for building new applications or maintaining existing ones.

Services are deployed within a Tanzu Kubernetes Grid Cluster, which is managed by the Container Service Extension 4.0 or later. This plays a crucial role in the deployment of services, with a service operator installed in a selected tenant Tanzu Kubernetes Cluster responsible for managing the entire lifecycle of a service, from inception to dissolution.

To better understand the architecture of the VMware Cloud Director Extension for Data Solutions, consider the high-level diagram provided below.

Use Cases for the VMware Cloud Director Extension for Data Solutions

Tenants can utilize the VMware Cloud Director Extension for Data Solutions for various purposes, such as creating database and messaging services at scale. Once these services are available in a tenant organization, authorized tenant users can create, upgrade, or delete PostgreSQL, MySQL, or RabbitMQ services. Advanced settings, such as enabling High Availability for VMware SQL with MySQL and VMware SQL with PostgreSQL, can be applied during the creation of a service.

In addition to creating database and messaging services, tenants can effortlessly manage their upgrades. When a newer version becomes available, the VMware Cloud Director Extension for Data Solutions interface will notify the user and prompt them to take action. Tenant administrators or users can then upgrade the chosen service with a single click, safeguarding the service against vulnerabilities and ensuring its stability.

Tenant self-service UI for the lifecycle management of VMware SQL with MySQL

Step:1 – Installing VMware Cloud Director Data Solutions operator

To run VMware SQL with MySQL in a Kubernetes cluster using the VMware Cloud Director extension for Data Solutions, you must install a VMware Cloud Director extension for the Data Solutions operator to a Kubernetes cluster. The VMware Cloud Director Data Solutions operator (DSO) is a backend service running within each tenant Kubernetes cluster. DSO manages the lifecycle of user resources in Kubernetes clusters upon user requests, sent through VMware Cloud Director Resource Defined Entities. The resources include both data solution operators like VMware RabbitMQ operators for Kubernetes and data solution instances like VMware RabbitMQ for Kubernetes. It deploys, upgrades, and updates various data solutions in Kubernetes clusters on behalf of the user.

  • Log in to VMware Cloud Director extension for Data Solutions from VMware Cloud Director.
  • Click Settings > Kubernetes Clusters.
  • Select the Kubernetes cluster where you want to run VMware Cloud Director extension for Data Solutions, and click Install Operator.
  • It takes a few minutes for the Operator Status of the cluster to change to Active.

Step:2 – Installing VMware SQL with MySQL

  • Log in to VMware Cloud Director extension for Data Solutions from VMware Cloud Director.
  • Click Instances > New Instance.
  • Enter the necessary details.
    • Enter the instance name.
    • Select the solution for which you want to create an instance.
    • Select the Kubernetes cluster for this instance. 
    • Enter a default password.
    • Select a sizing template for this instance.
  • To customize more details, click Show Advanced Settings.
  • To connect to a MySQL instance from outside your Kubernetes cluster, configure the Kubernetes service for the instance to be of type “LoadBalancer”: select “Expose Service by Load Balancer”.
  • Click Create.
  • You can also track the progress of the deployment by running:
# kubectl get pods -n vcd-ds-workloads
  • After a few minutes, you should see the MySQL instance status as “Running” in the VCD UI as well.
  • You can track progress in the VCD task bar and in the Monitor section of the VCD UI; the deployment automatically creates the required persistent volumes as well as network services such as the load balancer and NAT rules.
  • Kubernetes requests, and Cloud Director then allocates, an external IP address (the load balancer IP) for the service, which we can use to connect to the MySQL service from outside the cluster. You can view the load balancer IP by running:
# kubectl get svc -A
  • Take note of the external IP, which in this case is 172.16.2.6; we will use this IP to connect to the MySQL cluster from outside.
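If you prefer to script the lookup instead of scanning the `kubectl get svc -A` table, here is a hedged Python sketch that pulls the load balancer IP out of the JSON form of the same output (`kubectl get svc -A -o json`). The embedded sample document below is illustrative, abbreviated from a real response:

```python
import json

# Abbreviated sample of `kubectl get svc -A -o json` output (illustrative)
raw = '''{"items": [{"metadata": {"name": "mysqldb01"},
           "spec": {"type": "LoadBalancer"},
           "status": {"loadBalancer": {"ingress": [{"ip": "172.16.2.6"}]}}}]}'''

def loadbalancer_ips(svc_json):
    """Return (service name, external IP) pairs for LoadBalancer services."""
    doc = json.loads(svc_json)
    return [
        (svc["metadata"]["name"], ingress["ip"])
        for svc in doc["items"]
        if svc["spec"].get("type") == "LoadBalancer"
        for ingress in svc.get("status", {}).get("loadBalancer", {}).get("ingress", [])
    ]

print(loadbalancer_ips(raw))  # → [('mysqldb01', '172.16.2.6')]
```

In practice you would feed it the live output, e.g. `loadbalancer_ips(subprocess.run(["kubectl", "get", "svc", "-A", "-o", "json"], capture_output=True, text=True).stdout)`.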

Step:3 – Connecting to VMware SQL with MySQL

To connect to a MySQL service using Workbench, you can follow these steps:

  • Download and install MySQL Workbench from the official MySQL website if you haven’t already.
  • Launch MySQL Workbench on your computer.
  • In the Workbench home screen, click on the “+” icon next to “MySQL Connections” to create a new connection.
  • In the “Setup New Connection” dialog, enter a connection name of your choice.
  • Configure the following settings:
1 - Connection Method: Standard TCP/IP
2 - MySQL Hostname: the external IP address of the MySQL server, in this case 172.16.2.6
3 - MySQL Server Port: the port number (default is 3306)
4 - Username: the MySQL username, “mysqlappuser”
5 - Password: the MySQL password you entered while deploying MySQL
  • Click on the “Test Connection” button to check if the connection is successful. If the test is successful, you should see a success message.
  • Click on the “OK” button to save the connection settings.

After following these steps, you should be connected to your MySQL service using MySQL Workbench. You can then use the Workbench interface to view the newly deployed instance, run queries, and perform various read-only database tasks. The access is limited because:

When a tenant creates a MySQL deployment through the VMware Cloud Director extension for Data Solutions, the MySQL instance gets a default DB user named “mysqlappuser”, whose privileges are limited to the default DB instance. Since “mysqlappuser” does not have enough permissions to create a database, we need to create a new user that does.

Step:4 – Enable a full-privileged DB user for your MySQL instance provisioned by the VMware Cloud Director Extension for Data Solutions

Every MySQL deployment has a built-in “root” user with administrator privileges, but this user does not support external access. We will use it to create another full-privileged DB user.

  • Find the DB root user’s password using the commands below:
# kubectl get secret -n vcd-ds-workloads
# kubectl get secret mysqldb01-credentials -n vcd-ds-workloads -o jsonpath='{.data}'
  • Take note of “rootPassword” from the output of the above command and copy the full base64-encoded password, which in this case starts with “RE1n**********”.
  • Decode the “rootPassword” value using the command below:
# echo "RE1nem90TmltZ2laQWdsZC1oqM0VGRw==" | base64 --decode
  • The output of the above command is the root password; use this password to log in.
  • Enter the primary (writable) pod of your MySQL deployment using the command below (refer to this link to identify the writable pod):
# kubectl exec -it mysqldb01-2 -n vcd-ds-workloads -c mysql -- bash
  • Now run the commands below to connect to the MySQL instance:
1 - Log in to MySQL:
      # mysql -u root -p$ADMIN_PASSWORD -h 127.0.0.1 -P 3306
2 - Enter the decoded password when prompted.
3 - Once you have successfully connected to MySQL, create a local MySQL user:
      mysql> CREATE USER 'user01'@'%' IDENTIFIED BY 'password';
      mysql> GRANT ALL PRIVILEGES ON *.* TO 'user01'@'%';
  • That’s it. Now we can connect to this SQL instance and create a new database, as you can see in the screenshot below.
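The base64 handling in the steps above can be sketched generically. Kubernetes stores every field under a secret's `.data` key base64-encoded; the value below is a made-up sample, not a real password:

```shell
# A secret's .data fields are base64-encoded; decode a sample value
# (illustrative, not a real password)
encoded="cGFzc3dvcmQxMjM="
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"   # prints: password123
```

The same pipeline works when the encoded value comes straight out of `kubectl get secret ... -o jsonpath='{.data.rootPassword}'`.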

NOTE: Thanks to the writer of this blog, https://realpars.com/mysql/, I am using this database for this blog article.

In summary, by leveraging the VMware Cloud Director Extension for Data Solutions, CSPs can unlock the power of DBaaS using VMware SQL with MySQL, offering customers a comprehensive and efficient platform for their data management needs. With simplified deployment and management, seamless scalability, enhanced performance optimization, high availability, and robust security and compliance features, CSPs can provide businesses with a reliable and scalable DBaaS solution. Embrace the potential of DBaaS with VMware Cloud Director Extension for Data Solutions and VMware SQL with MySQL, and empower your customers with streamlined data management capabilities in the cloud.

Assess Your Sovereign Cloud Stack for Compliance

Featured

VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud is a management pack available in the VMware Marketplace. You can download and install this management pack on an instance of vRealize (ARIA) Operations to automatically assess a Sovereign Cloud stack for compliance. 

VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud is intended to be used by the VMware Cloud Service Partners who are part of the Sovereign Cloud Initiative. The following products in the Sovereign Cloud stack are currently supported for compliance assessment:

  • vSphere
  • NSX-T
  • VMware Cloud Director
  • VMware Cloud Director Availability

For every Sovereign Cloud instance, providers need one instance of vRealize (ARIA) Operations with the VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud installed and configured. The compliance score card is available in the Optimize > Compliance screen of vRealize (ARIA) Operations.

Compliance Pack for Sovereign Cloud Controls Rules

The compliance rules are based on a checklist that VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud utilizes to monitor the products in the Sovereign Cloud stack. The checklist is based on the Sovereign Cloud Framework which takes into consideration the following key principles:

Data Sovereignty and Jurisdictional Control

Data should reside locally.
The cloud should be managed and governed locally, and all data processing including API calls should happen within the country/geography.
Data should be accessible only to residents of the same country, and the data should not be accessible under foreign laws or from any outside geography.

Data Access and Integrity

Two data center locations.
File, Block, and Object store options
Backup services, Disaster Recovery
Low-latency connectivity, Micro segmentation

Data Security and Compliance

Industry recognized Security Controls (minimum ISO/IEC 27001 or equivalent)
Additional relevant industry or governmental certifications
Third-party audits & Zero Trust Security & Encryption
Catalog of trusted images using the sovereign repository
Support for air gapped zones/regions
Operating personnel requirements and security clearance

Data Independence and Interoperability

Workload migration with bi-directional workload portability
Modern application architecture using containers
Support for hybrid cloud deployments

Control Rules and Product Control Set for vSphere

The vSphere control set is available separately for versions 7 and 6.5/6.7; details can be found here.

Control Rules and Product Control Set for NSX-T

Control rules and the product control set for NSX-T can be found here. The supported NSX-T version is 3.2.x or later.

Controls Rules and Product Control Set for VMware Cloud Director

Control rules and the product control set for VMware Cloud Director can be found here. The supported version of the VMware Cloud Director management pack is 8.10.2.

Control Rules and Product Control Set for VMware Cloud Director Availability

Controls rules and product control set for VMware Cloud Director Availability and details can be found Here. The version of the VMware Cloud Director Availability management pack supported is 1.2.1.

Install the VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud

The VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud consists of a PAK file that contains default views, reports, alerts, and symptoms for the VMware software in the Sovereign Cloud stack.

  • Download the PAK file for VMware vRealize Operations Compliance Pack for Sovereign Cloud from the VMware Marketplace, and save the file to a temporary folder on your local system.
  • Log in to the vRealize (ARIA) Operations user interface with administrator privileges. Installation of this management pack is to be done by VMware Cloud Provider Program Partners.
  • In the left pane of vRealize (ARIA) Operations, click Integrations under Data Sources.
  • In the Repository tab, click the ADD button.
  • The Add Solution dialog box opens. Click BROWSE to locate the temporary folder on your system, and select the PAK file.
  • Read and select the checkboxes if required, and click Upload. The upload might take several minutes.
  • Read and accept the EULA, and click Next. Installation details appear in the window during the process.
  • When the installation is completed, click Finish.

In the vROps instance, go to Optimize > Compliance. In the VMware Cloud tab, you can see the VMware Sovereign Cloud Compliance card in the VMware Sovereign Cloud Benchmarks section. Click Enable.

When you click Enable, a list of policies appears. Select the policy that you want to apply; here, I am selecting the default policy.

With the vRealize (ARIA) Operations reporting functions, you can generate a report to view the compliance status of your Sovereign Cloud. You can download the report in a PDF or CSV file format for future and offline needs.

Accessing Compliance Reports

In the VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud, two kinds of reports are available:

At a VMware Cloud Provider Program Partner level – This report considers all the infrastructure-level resources and generates a report at the org level, showing non-compliance. The compliance data covers the org and its associated child hierarchy, and includes compliance details for the org, org VDCs, virtual machines, logical switches, and logical routers as per the hierarchy.

From the left menu, click Visualize > Reports and run the report – [vCloud Director] – VMware Sovereign Cloud – Non-Compliance Report.

From the Reports panel, click Generated Reports. To manage a generated report from the list, click the vertical ellipsis next to the [vCloud Director] – VMware Sovereign Cloud – Non-Compliance Report and select options such as Run or Delete.

At a tenant level – The VMware Cloud Provider Program Partner generates a filtered report that the tenant can access. This report is at an org VDC level; the compliance data covers the child hierarchy only, and includes compliance details for virtual machines, logical switches, and logical routers as per the hierarchy. Tenants can view the generated reports in the VMware Chargeback console by logging in and navigating to Reports > Tenant Reports and clicking the Generated Reports tab.

Custom Benchmarks

If required, CSP partners or customers can create a custom compliance benchmark to ensure that objects comply with compliance alerts available in vRealize (ARIA) Operations, or with custom compliance alert definitions. When a compliance alert is triggered on your vCenter instance, hosts, virtual machines, distributed port groups, or distributed switches, you can investigate the compliance violation. You can add up to five custom compliance scorecards.

This first version of the Sovereign Cloud compliance pack for ARIA Operations brings our Cloud Providers a comprehensive compliance checklist that uses the sovereign framework as a benchmark and continuously validates applications and infrastructure to help partners maintain their sovereignty posture. It extends ARIA Operations so that, in addition to capacity, cost, and performance monitoring, it also monitors and reports compliance drift to the right stakeholders to better manage their compliance structure.

Security VDC Architecture with VMware Cloud Director

Featured

Cloud Director VDCs come with all the features you’d expect from a public cloud. A virtual data center is a logical representation of a physical data center, created using virtualization technologies, and it allows IT administrators to create, provision, and manage virtualized resources such as servers, storage, and networking in a flexible and efficient manner. The recently released VMware Cloud Director 10.4.1 brings quite a lot of new features. In this article, I want to double-click on external networking.

External Networks

An external network is a network that is external to the VCD infrastructure, such as a physical network or a virtual network. External networks connect virtual machines in VCD with the outside world, allowing them to communicate with the Internet or with other networks that are not part of the VCD infrastructure.

New Features in Cloud Director 10.4.1 External Networks

With the release of Cloud Director 10.4.1, external networks that are NSX-T segment backed (VLAN or overlay) can now be connected directly to an Org VDC Edge Gateway and no longer require routing through the Tier-0/VRF gateway that the Org VDC Edge Gateway is connected to. This connection is made via a service interface on the service node of the Tier-1 gateway backing the Org VDC Edge Gateway. The Org VDC Edge Gateway still needs a parent Tier-0/VRF, although it can be disconnected from it. Here are some of the use cases of this external network capability that we are going to discuss:

  • Transit connectivity across multiple Org VDC Edge Gateways to route between different Org VDCs
  • Routed connectivity via dedicated VLAN to tenant’s co-location physical servers
  • Connectivity towards Partner Service Networks
  • MPLS connectivity to direct connect while internet is accessible via shared provider gateway

Security VDC Architecture using External Networks (transit connectivity across two Org VDC Edge Gateways)

A Security VDC is a common strategy for connecting multiple VDCs: the Security VDC becomes the single egress and ingress point and can host additional firewalls and other security services. The section below shows how this can be achieved using the new external network feature:

  • This uses an overlay (Geneve) backed external network.
  • This overlay network must be prepared by the provider in NSX as well as in Cloud Director.
  • The NSX Tier-1 GW does not provide any dynamic routing capabilities, so routing to such a network can be configured only via static routes.
  • The Tier-1 GW always has a default route (0.0.0.0/0) pointing towards its parent Tier-0/VRF GW.
  • To point the default route at the segment-backed external network, you need to use two more specific routes. For example:
    • 0.0.0.0/1 next hop <IP> scope <network>
    • 128.0.0.0/1 next hop <IP> scope <network>
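The two /1 routes above work because together they cover the entire IPv4 address space, and each is more specific than the 0.0.0.0/0 default, so they win the longest-prefix-match lookup. A quick Python check of that claim:

```python
import ipaddress

low = ipaddress.ip_network("0.0.0.0/1")
high = ipaddress.ip_network("128.0.0.0/1")

# Together the two halves cover every IPv4 address
covers_all = low.num_addresses + high.num_addresses == 2 ** 32

# Any given address falls into exactly one of the two halves
sample = ipaddress.ip_address("10.120.30.11")
print(covers_all, sample in low)  # True True
```

Since any /1 is more specific than /0, traffic follows these routes to the external network while the Tier-1's built-in default toward its parent Tier-0/VRF remains untouched.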

Security VDC Architecture using External Networks – Multi-VDC

  1. Log in to the VMware Cloud Director Service Provider Admin Portal.
  2. From the top navigation bar, select Resources and click Cloud Resources.
  3. In the left pane, click External Networks and click New.

On the Backing Type page, select NSX-T Segments and a registered NSX Manager instance to back the network, and click Next.

Enter a name and a description for the new external network.

Select an NSX segment from the list to import and click Next. An NSX segment can be backed either by a VLAN transport zone or by an overlay transport zone.

  1. Configure at least one subnet and click Next.
    a. To add a subnet, click New.
    b. Enter a gateway CIDR.
    c. To enable the subnet, select the State checkbox.
    d. Configure a static IP pool by adding at least one IP range or IP address.
    e. Click Save.
    f. (Optional) To add another subnet, repeat steps a to e.

Review the network settings and click Finish.

Now the provider needs to go to the tenant org VDC, attach the external network configured above to the tenant’s Tier-1 edge gateway, and offer net-new networking configuration and options.
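For providers who prefer Infrastructure as Code over the UI wizard above, a hedged Terraform sketch of importing an NSX-T segment-backed external network might look like the following. Resource and attribute names follow the vcd provider's `vcd_external_network_v2` documentation (verify against your provider version); the manager name, segment name, and addresses are assumptions for illustration:

```hcl
# Look up the registered NSX-T Manager (name is illustrative)
data "vcd_nsxt_manager" "main" {
  name = "nsxt-manager"
}

# Import a pre-created NSX-T segment as a VCD external network
resource "vcd_external_network_v2" "segment_backed" {
  name        = "transit-net"
  description = "NSX-T segment backed external network for transit connectivity"

  nsxt_network {
    nsxt_manager_id   = data.vcd_nsxt_manager.main.id
    nsxt_segment_name = "transit-segment"   # segment prepared in NSX beforehand
  }

  ip_scope {
    gateway       = "192.168.100.1"
    prefix_length = 24

    static_ip_pool {
      start_address = "192.168.100.10"
      end_address   = "192.168.100.50"
    }
  }
}
```

This mirrors the UI steps above: backing type, segment selection, then subnet and static IP pool.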

Other Patterns

Routed connectivity via dedicated VLAN to tenant’s co-location physical servers or MPLS

  • This uses a VLAN-backed external network.
  • This VLAN-backed network must be prepared by the provider in NSX as well as in Cloud Director.
  • The NSX Tier-1 GW does not provide any dynamic routing capabilities, so routing to such a network can be configured only via static routes.
  • One VLAN segment can be connected to only one Org VDC Edge GW.
  • The Tier-1 GW always has a default route (0.0.0.0/0) pointing towards its parent Tier-0/VRF GW.
  • Set a static route to the co-location network via the segment-backed external network. For example:
    • 172.16.50.0/24 next hop <10.10.10.1> scope <external network>

Connectivity towards Partner Service Networks

  • This uses a VLAN-backed external network.
  • This VLAN-backed network must be prepared by the provider in NSX as well as in Cloud Director.
  • The NSX Tier-1 GW does not provide any dynamic routing capabilities, so routing to such a network can be configured only via static routes.
  • One VLAN segment can be connected to only one Org VDC Edge GW.
  • The Tier-1 GW always has a default route (0.0.0.0/0) pointing towards its parent Tier-0/VRF GW.
  • Set a static route to the partner service network via the segment-backed external network. For example:
    • <Service Network> next hop <Service Network Router IP> scope <External Network>

Download the PDF here.

I hope this article helps providers offer net-new network capabilities to their tenants. Please feel free to share feedback.

Getting Started with VMware Cloud Director Container Service Extension 4.0

Featured

VMware Cloud Director Container Service Extension brings Kubernetes as a service to VMware Cloud Director, offering multi-tenant, VMware supported, production ready, and compatible Kubernetes services with Tanzu Kubernetes Grid. As a service provider administrator, you can add the service to your existing VMware Cloud Director tenants. By using VMware Cloud Director Container Service Extension, customers can also use Tanzu products and services such as Tanzu® Mission Control to manage their clusters.

Pre-requisite for Container Service Extension 4.0

  • Provider-specific organization – Before you can configure the VMware Cloud Director Container Service Extension server, you must create an organization to host it.
  • Organization VDC within the organization – The Container Service Extension appliance will be deployed in this organization virtual data center.
  • Network connectivity – Network connectivity is required between the machine where VMware Cloud Director Container Service Extension is installed and the VMware Cloud Director server. VMware Cloud Director Container Service Extension communicates with VMware Cloud Director using the VMware Cloud Director public API endpoint.
  • CSE 4.0 CPI automatically creates load balancers; you must ensure that you have configured NSX Advanced Load Balancer, NSX Cloud, and an NSX Advanced Load Balancer Service Engine Group for tenants who need to create Tanzu Kubernetes clusters.

Provider Configuration

With the release of VMware Cloud Director Container Service Extension 4.0, service providers can use the CSE Management tab in the Kubernetes Container Clusters UI plug-in, which demonstrates the step-by-step process of configuring the VMware Cloud Director Container Service Extension server.

Install Kubernetes Container Clusters UI plug-in for VMware Cloud Director

You can download the Kubernetes Container Clusters UI plug-in from the VMware Cloud Director download page and upload the plug-in to VMware Cloud Director.

NOTE: If you have previously used the Kubernetes Container Clusters plug-in with VMware Cloud Director, it is necessary to deactivate it before you can activate a newer version, as only one version of the plug-in can operate at one time in VMware Cloud Director. Once you activate a new plug-in, it is necessary to refresh your Internet browser to begin using it.

Once the partner has installed the plug-in, the Getting Started section within the CSE Management page helps providers learn about and set up VMware Cloud Director Container Service Extension in VMware Cloud Director through the Kubernetes Container Clusters UI plug-in 4.0. At a very high level, this is a six-step process:

Let’s start following these steps and deploy.

Step:1 – This section links to the locations where providers can download the following two types of OVA files that are necessary for VMware Cloud Director Container Service Extension configuration:

NOTE: Do not download FIPS-enabled templates.

Step:2 – Create a catalog in VMware Cloud Director and upload the VMware Cloud Director Container Service Extension OVA files that you downloaded in Step 1 into this catalog.

Step:3 – This section initiates the VMware Cloud Director Container Service Extension server configuration process. In this process, you can enter details such as software versions, proxy information, and syslog location. This workflow automatically creates a Kubernetes Clusters rights bundle, the CSE Admin Role, the Kubernetes Cluster Author role, and VM sizing policies. The Kubernetes Clusters rights bundle and Kubernetes Cluster Author role are automatically published to all tenants, and the following Kubernetes resource versions are deployed:

Kubernetes Resource                 Supported Version
Cloud Provider Interface (CPI)      1.2.0
Container Storage Interface (CSI)   1.3.0
CAPVCD                              1.0.0

Step:4 – This section links to the Organization VDCs section in VMware Cloud Director, where you can assign VM sizing policies to customer organization VDCs. To avoid resource limit errors in clusters, it is necessary to add Tanzu Kubernetes Grid VM sizing policies to organization virtual data centers. The Tanzu Kubernetes Grid VM sizing policies are automatically created in the previous step. The policies created are as below:

Sizing Policy     Description               Values
TKG small         Small VM sizing policy    2 CPU, 4 GB memory
TKG medium        Medium VM sizing policy   2 CPU, 8 GB memory
TKG large         Large VM sizing policy    4 CPU, 16 GB memory
TKG extra-large   X-large VM sizing policy  8 CPU, 32 GB memory

NOTE: Providers can create more policies manually based on requirements and publish them to tenants.

In the VMware Cloud Director UI, select an organization VDC and, from the left panel under Policies, select VM Sizing. Click Add, then from the data grid select the Tanzu Kubernetes Grid sizing policy you want to add to the organization, and click OK.

Step:5 – This section links to the Users section in VMware Cloud Director, where you can create a user with the CSE Admin Role. This role grants the user administration privileges for VMware Cloud Director Container Service Extension administrative purposes. You will use this user account in the OVA deployment parameters when you start the VMware Cloud Director Container Service Extension server.

Step:6 – This section links to the vApps section in VMware Cloud Director where you can create a vApp from the uploaded VMware Cloud Director Container Service Extension server OVA file to start the VMware Cloud Director Container Service Extension server.

  • Create a vApp from VMware Cloud Director Container Service Extension server OVA file.
  • Configure the VMware Cloud Director Container Service Extension server vApp deployment lease.
  • Power on the VMware Cloud Director Container Service Extension server.

Container Service Extension OVA deployment

Enter a vApp name, optionally a description, and the runtime and storage leases (set these to no lease so that the vApp does not suspend automatically), and click Next.

Select a virtual data center and review the default configuration for resources, compute policies, hardware, networking, and edit details where necessary.

  • In the Custom Properties window, configure the following settings:
    • VCD host: VMware Cloud Director URL
    • CSE service account’s username: the CSE Admin user in the organization
    • CSE service account’s API token: to generate an API token, log in to a provider session with the CSE user you created in Step 5, then go to “User Preferences” and click “NEW” in the “Access Tokens” section. (When you generate an API access token, you must copy the token, because it appears only once. After you click OK, you cannot retrieve this token again; you can only revoke it.)

  • CSE service account’s org: The organization that the user with the CSE Admin role belongs to, and that the VMware Cloud Director Container Service Extension server deploys to.
  • CSE service vApp’s org: the name of the provider organization where the CSE vApp will be deployed.
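As an aside, the API token generated above is a refresh token, and it can also be exchanged manually for a short-lived access token, which is a handy way to verify that the CSE service account works before powering on the vApp. This is a sketch; the host name and token value are placeholders for your environment:

```shell
# Exchange a VCD API token (a refresh token) for an access token.
# vcd.example.com and the token value are placeholders.
VCD_HOST="vcd.example.com"
API_TOKEN="<paste-the-api-token-here>"

curl -sk -X POST "https://${VCD_HOST}/oauth/provider/token" \
  -H "Accept: application/json" \
  -d "grant_type=refresh_token" \
  -d "refresh_token=${API_TOKEN}"
# The JSON response contains an "access_token" that can be used as
# "Authorization: Bearer <access_token>" on subsequent API calls.
```

If the call returns an access token, the service account credentials and token are valid.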

In the Virtual Applications tab, in the bottom left of the vApp, click Actions > Power > Start. This completes the vApp creation from the VMware Cloud Director Container Service Extension server OVA file. This task is the final step for service providers to perform before the VMware Cloud Director Container Service Extension server can operate and start provisioning Tanzu Kubernetes clusters.

CSE 4.0 is packed with capabilities that address and attract developer personas with an improved feature set and simplified cluster lifecycle management. Users can now build, upgrade, resize, and delete K8s clusters directly from the UI, making it simpler and faster to accomplish tasks than before. This completes the provider section of Container Service Extension; in the next blog post, I will cover the tenant workflow.

Multi-Tenant Tanzu Data Services with VMware Cloud Director

Featured

VMware Cloud Director extension for VMware Data Solutions is a plug-in for VMware Cloud Director (VCD) that enables cloud providers to expand their multi-tenant cloud infrastructure platform to deliver a portfolio of on-demand caching, messaging, and database software services at massive scale. This creates a new opportunity for cloud providers to offer additional cloud-native developer services in addition to VCD-powered Infrastructure-as-a-Service (IaaS).

VMware Cloud Director extension for Data Solutions offers a simple tenant-facing self-service UI for the lifecycle management of the Tanzu data services below, with a single view across multiple instances and URLs to individual instances for service-specific management.

Tenant Self-Service Access to Data Solutions

Tenant users can access VMware Cloud Director extension for Data Solutions from the VMware Cloud Director tenant portal.

Before a tenant user can deploy any of the above solutions, they must prepare their CSE-deployed Tanzu Kubernetes clusters. When you click Install Operator for a Kubernetes cluster, the Data Solutions operator is automatically installed to that cluster; this operator handles the lifecycle management of the data services. To install the operator, log in to VMware Cloud Director extension for Data Solutions from VMware Cloud Director and then:

  1. Click Settings > Kubernetes Clusters.
  2. Select the Kubernetes cluster on which you want to deploy data services.
  3. Click Install Operator.

It takes a few minutes for the status of the cluster to change to Active.
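If you want to confirm from the cluster side that the Data Solutions operator landed, a quick kubectl check works. This is a sketch; the exact namespace and pod names depend on the Data Solutions release, so treat the grep patterns below as assumptions to adjust:

```shell
# Look for the Data Solutions operator pod across all namespaces;
# the "data-solutions" naming is an assumption, adjust for your release.
kubectl get pods -A | grep -i data-solutions

# Wait until the operator pod reports Running before deploying instances.
kubectl get pods -A --field-selector=status.phase=Running | grep -i operator
```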

Deploy a Tanzu Data Services instance

Go to Solutions, choose the required solution, and click “Launch”.

This takes you to the “Instances” page, where you enter the necessary details.

  • Enter the instance name.
  • Ensure the solution is set to RabbitMQ.
  • Select the Kubernetes cluster (you can only select a cluster on which the Data Solutions Operator is successfully installed).
  • Select a solution template (T-shirt sizes).

To customize the instance, for example to enable the RabbitMQ Management Console or to Expose Load Balancer for AMQP, click Show Advanced Settings and select the appropriate options.

Monitor Instance Health using Grafana

Tanzu Kubernetes Grid provides cluster monitoring services by implementing the open source Prometheus and Grafana projects. Tenants can use the Grafana portal to get insights into the state of the RabbitMQ nodes and runtime. For this to work, Grafana must be installed on the CSE 4 Tanzu cluster.

NOTE: Follow this link for Prometheus and Grafana installation on CSE Tanzu K8s clusters.
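Once the Grafana package is installed, you can reach the portal without a public ingress via a port-forward. The namespace and service name below are the TKG package defaults and are assumptions for your environment:

```shell
# Forward local port 3000 to the in-cluster Grafana service.
# tanzu-system-dashboards is the TKG package default namespace; adjust if needed.
NS="tanzu-system-dashboards"
kubectl -n "$NS" port-forward svc/grafana 3000:80
# Then browse to http://localhost:3000
```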

Connecting to RabbitMQ

Since I selected “Expose Load Balancer for AMQP” during deployment, if you look at the VCD load balancer configuration, you will see that CSE has automatically exposed RabbitMQ as a load balancer VIP and created a NAT rule, so you can access it from outside.
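To sanity-check the exposed endpoint, you can probe the standard RabbitMQ ports from outside: 5672 is the default AMQP port and 15672 is the management UI/API port. The VIP address and credentials below are placeholders:

```shell
RMQ_VIP="203.0.113.10"   # load balancer VIP from VCD (placeholder)

# Check that the AMQP port answers.
nc -zv "$RMQ_VIP" 5672

# If the management console was enabled, query its HTTP API.
curl -s -u admin:'<password>' "http://${RMQ_VIP}:15672/api/overview"
```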

Provider Configuration

Before you start using VMware Cloud Director extension for Data Solutions, you must meet certain prerequisites:

  1. VMware Cloud Director version 10.3.1 or later.
  2. Container Service Extension version 4.0 or later added to your VMware Cloud Director.
  3. A client machine running macOS or Linux with network connectivity to the VMware Cloud Director REST endpoint.
  4. A VMware Data Solutions account.

Detailed instructions for installing VMware Cloud Director extension for VMware Data Solutions are available here.

VMware Cloud Director extension for VMware Data Solutions comes at no additional cost to cloud providers. Note, however, that cloud providers need to report their service consumption of the data services themselves, which do carry a cost.

VMware Cloud Director Charge Back Explained

Featured

VMware Chargeback not only enables metering and chargeback capabilities, but also provides visibility into infrastructure usage through performance and capacity dashboards for cloud providers as well as tenants.

To help cloud providers and tenants realize more value for every dollar they spend on infrastructure (and in turn provide similar value to their tenants), our focus is not only to expand the coverage of services that can be priced in VMware Chargeback, but also to provide visibility into the cost of infrastructure to providers and a billing summary to organizations, clearly highlighting the cost incurred by various business units. Before we dive in further to see what’s new with this release, please note:

  • vRealize Operations Tenant App is now rebranded to VMware Chargeback.
  • VMware Chargeback is also now available as a SaaS offering. The Software-as-a-Service (SaaS) offering will be available as early access, with limited availability, with the purchase or trial of the VMware Cloud Director™ service. See the Announcing VMware Chargeback for Managed Service Providers blog.

Creation of pricing policy based on chargeback strategy

A provider administrator can create one or more pricing policies based on how they want to charge their tenants. Based on the Cloud Director allocation models, each pricing policy is of the type Allocation Pool, Reservation Pool, or Pay-As-You-Go.

NOTE – Pricing policies apply to VMs at a minimum granularity of five minutes. VMs that are created and deleted within a span shorter than five minutes are still charged.

CPU Rate

Providers can charge the CPU rate based on GHz or vCPU count.

  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly.
  • Charge Based on Power State – indicates the pricing model based on which the charges are applied; values are Always, Only when powered on, and Powered on at least once.
  • Default Base Rate – any base rate the provider wants to charge.
  • Add Slab – providers can optionally charge different rates depending on the number of vCPUs used.
  • Fixed Cost – fixed costs do not depend on the units of charging.
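To illustrate how slabs combine with the base rate, here is a small worked example with invented numbers: say the first 4 vCPUs are charged at $1.00 per vCPU per hour and every vCPU above 4 at $0.60. For an 8-vCPU VM running one hour:

```shell
# Hypothetical slab rates: vCPUs 1-4 at $1.00/h, vCPUs 5+ at $0.60/h.
vcpus=8
charge=$(awk -v n="$vcpus" 'BEGIN {
  first = (n < 4 ? n : 4) * 1.00       # slab 1: up to 4 vCPUs
  rest  = (n > 4 ? n - 4 : 0) * 0.60   # slab 2: the remainder
  printf "%.2f", first + rest
}')
echo "$charge"   # 4*1.00 + 4*0.60 = 6.40
```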

Memory Rate

  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly.
  • Charge Based on – indicates the pricing model based on which the charge is applied; values are Usage, Allocation, and Maximum from usage and allocation.
  • Charge Based on Power State – indicates the pricing model based on which the charges are applied; values are Always, Only when powered on, and Powered on at least once.
  • Default Base Rate – any base rate the provider wants to charge.
  • Add Slab – providers can optionally charge different rates depending on the memory allocated.
  • Fixed Cost – fixed costs do not depend on the units of charging.

Storage Rate

You can charge for storage either based on storage policies or independent of it.

  • This way of setting rates will be deprecated in a future release; it is advisable to use the Storage Policy option instead.
  • Select the Storage Policy Name from the drop-down menu.
  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly.
  • Charge Based on – indicates the pricing model based on which the charge is applied. You can charge for the used or configured storage of the VMs.
  • Charge Based on Power State – decides whether the charge is applied based on the power state of the VM; values are Always, Only when powered on, and Powered on at least once.
  • Add Slab – you can optionally charge different rates depending on the storage allocated.

Network Rate

Enter the External Network Transmit and External Network Receive rates.

Note: If your network is backed by NSX-T, you will be charged only for the network data transmit and network data receive.

  • Network Transmit Rate – select the Charge Period and enter the Default Base Rate. Using slabs, you can optionally charge different rates depending on the network data consumed.
  • Network Receive Rate – select the Charge Period and enter the Default Base Rate. Using slabs, you can optionally charge different rates depending on the network data consumed. Enter valid numbers for the Base Rate Slab and click Add Slab.

Advanced Network Rate

Under Edge Gateway Size, enter the base rates for the corresponding edge gateway sizes.

  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly.
  • Enter the Base Rate.

Guest OS Rate

Use the Guest OS Rate to charge differently for different operating systems.

  • Enter the Guest OS Name.
  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly.
  • Charge Based on Power State – decides whether the charge is applied based on the power state of the VM; values are Always, Only when powered on, and Powered on at least once.
  • Enter the Base Rate.

Cloud Director Availability

The Cloud Director Availability section is used to set pricing for replications created from Cloud Director Availability.

  • Replication SLA Profile – enter a replication policy name.
  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly.
  • Enter the Base Rate.

You can also charge for the storage consumed by replication objects in the Storage Usage Charge section. This is used to set additional pricing for storage used by Cloud Director Availability replications in Cloud Director. Please note that the storage usage defined in this tab is added on top of the Storage Policy base rate.

vCenter Tag Rate

This section is used for any additional charges to be applied to VMs based on their tags discovered from vCenter (typical examples are Antivirus=true, SpecialSupport=true, and so on).

  • Enter the Tag Category and Tag Value.
  • Charge based on a Fixed Rate, or
  • Charge based on an Alternate Pricing Policy – select the appropriate pricing policy.
  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly.
  • Charge Based on Power State – decides whether the charge is applied based on the power state of the VM; values are Always, Only when powered on, and Powered on at least once.
  • Enter the Base Rate.

VCD Metadata Rate

Use the VCD Metadata Rate to charge differently based on the metadata set on vApps.

NOTE – Metadata-based prices are available in bills only if the Enable Metadata option is enabled in the vRealize Operations Management Pack for VMware Cloud Director.

  • Enter the Tag Category and Tag Value.
  • Charge based on a Fixed Rate, or
  • Charge based on an Alternate Pricing Policy – select the appropriate pricing policy.
  • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly.
  • Charge Based on Power State – decides whether the charge is applied based on the power state of the VM; values are Always, Only when powered on, and Powered on at least once.
  • Enter the Base Rate.

One Time Fixed Cost

One Time Fixed Cost is used for one-time incidental charges on virtual machines, such as creation/setup charges, or charges for one-off incidents like the installation of a patch. These costs do not repeat on a recurring basis.

For values, follow the VCD Metadata and vCenter Tag sections.

Rate Factors

Rate factors are used to either increase or discount prices, either for individual resources consumed by virtual machines or for the whole charge against a virtual machine. Some examples:

  • Increase the CPU rate by 20% (factor 1.2) for all VMs tagged with CPUOptimized=true.
  • Discount the overall charge on a VM by 50% (factor 0.5) for all VMs tagged with PromotionalVM=true.
  • VCD Metadata
    • Enter the Tag Key and Tag Value.
    • Change the price of Total, vCPU, Memory, or Storage.
    • By applying a factor of – increase or decrease the price by entering a valid number.
  • vCenter Tag
    • Enter the Tag Key and Tag Value.
    • Change the price of Total, vCPU, Memory, or Storage.
    • By applying a factor of – increase or decrease the price by entering a valid number.
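The factor is a straight multiplier on the computed charge. A quick worked example with invented numbers, combining the two example tags above:

```shell
# Hypothetical: base CPU charge $10.00; factor 1.2 for CPUOptimized=true,
# then an overall 0.5 factor for PromotionalVM=true.
base=10.00
cpu_adjusted=$(awk -v b="$base" 'BEGIN { printf "%.2f", b * 1.2 }')
final=$(awk -v c="$cpu_adjusted" 'BEGIN { printf "%.2f", c * 0.5 }')
echo "$cpu_adjusted $final"   # 12.00 6.00
```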

Tanzu Kubernetes Clusters

This section is used to charge for Tanzu Kubernetes clusters and objects.

  • Cluster Fixed Cost
    • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly.
    • Fixed Cost – fixed costs do not depend on the units of charging.
  • Cluster CPU Rate
    • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly.
    • Charge Based on – decides whether the charge is applied based on Usage or Allocation.
    • Default Base Rate (per GHz)
  • Cluster Memory Rate
    • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly.
    • Charge Based on – decides whether the charge is applied based on Usage or Allocation.
    • Default Base Rate (per GB)

Additional Fixed Cost

You can use the Additional Fixed Cost section to charge at the Org VDC level, for charges such as overall tax, overall discounts, and so on. The charges can be applied to selected Org VDCs based on Org VDC metadata.

  • Fixed Cost
    • Charge Period – indicates the frequency of charging; values are Hourly, Daily, and Monthly.
    • Fixed Cost
  • VCD Metadata – enter the Tag Key and Tag Value.
  • VCD Metadata One Time – enter the Tag Key and Tag Value.

Apply Policy

Cloud Director Chargeback gives service providers the flexibility to map the created pricing policies to specific organization VDCs. By doing this, the service provider can holistically define how each of their customers is charged based on resource types.

Bills

Every tenant/customer of a service provider can review their bills using the Cloud Director Chargeback app. A service provider administrator can generate bills for a tenant by selecting a specific resource and a pricing policy to be applied for a defined period, and can also log in to review the bill details.

This completes the feature demonstration of Cloud Director Chargeback. Go ahead, deploy it, and add native chargeback power to your cloud.

AI/ML with VMware Cloud Director

Featured

AI/ML—short for artificial intelligence (AI) and machine learning (ML)—represents an important evolution in computer science and data processing that is quickly transforming a vast array of industries.

Why is AI/ML important?

It’s no secret that data is an increasingly important business asset, with the amount of data generated and stored globally growing at an exponential rate. Of course, collecting data is pointless if you don’t do anything with it, but these enormous floods of data are simply unmanageable without automated systems to help.

Artificial intelligence, machine learning, and deep learning give organizations a way to extract value out of the troves of data they collect, delivering business insights, automating tasks, and advancing system capabilities. AI/ML has the potential to transform all aspects of a business by helping organizations achieve measurable outcomes, including:

  • Increasing customer satisfaction
  • Offering differentiated digital services
  • Optimizing existing business services
  • Automating business operations
  • Increasing revenue
  • Reducing costs

As modern applications become more prolific, cloud providers need to address the increasing customer demand for accelerated computing, which typically requires large volumes of simultaneous computation that can be met with GPU capability.

Cloud providers can now leverage vSphere support for NVIDIA GPUs and NVIDIA AI Enterprise (a cloud-native software suite for the development and deployment of AI, optimized and certified for VMware vSphere). This enables vSphere capabilities like vMotion from within Cloud Director and allows providers to deliver multi-tenant GPU services, which are key to maximizing GPU resource utilization. With Cloud Director support for the NVIDIA AI Enterprise software suite, customers now have access to best-in-class, GPU-optimized AI frameworks and tools to deliver compute-intensive workloads, including artificial intelligence (AI) and machine learning (ML) applications, within their datacenters.

This solution with NVIDIA takes advantage of NVIDIA Multi-Instance GPU (MIG), which supports spatial segmentation between workloads at the physical level inside a single device, a big deal for multi-tenant environments, driving better optimization of hardware and increased margins. Cloud Director relies on host pre-configuration for GPU services included in NVIDIA AI Enterprise, which contains vGPU technology to enable the deployment and configuration of hosts and GPU profiles.

Customers can self-serve, manage, and monitor their GPU-accelerated hosts and virtual machines within Cloud Director. Cloud providers can monitor (through the Cloud Director API and UI dashboard) NVIDIA vGPU allocation and usage per VDC and per VM to optimize utilization, and can meter and bill (through the API) NVIDIA vGPU usage averaged over a unit of time per tenant for tenant billing.

Provider Workflow

  • Add GPU devices to ESXi hosts in vCenter and install the required drivers.
  • Verify that vGPU profiles are visible in the VCD provider portal under Resources → Infrastructure Resources → vGPU Profiles.
  • (Optional) Edit vGPU profiles to provide a tenant-facing name and the necessary tenant-facing instructions for each vGPU profile.
  • Create a PVDC backed by one or more clusters containing GPU hosts in vCenter.
  • In the provider portal, go to Cloud Resources → vGPU Policies and create a new vGPU policy by following the wizard’s steps.

Tenant Workflow

When you create a vGPU policy, it is not visible to tenants. You can publish a vGPU policy to an organization VDC to make it available to tenants.

Publishing a vGPU policy to an organization VDC makes the policy visible to tenants. The tenant can select the policy when they create a new standalone VM or a VM from a template, edit a VM, add a VM to a vApp, and create a vApp from a vApp template. You cannot delete a vGPU policy that is available to tenants.

  • Publish the vGPU policy to one or more tenant VDCs similar to the way we publish sizing and placement policies.
  • Create a new VM or instantiate a VM from a template. In vGPU-enabled VDCs, tenants can now select a vGPU policy.

Cloud Director not only allows for VMs; providers can also leverage Cloud Director’s Container Service Extension to offer GPU-enabled Tanzu Kubernetes clusters.

Step-by-Step Configuration

The video below covers the step-by-step process of configuring both the provider and tenant sides, as well as deploying TensorFlow GPU into a VM.

Cloud Director OIDC Configuration using OKTA IDP

Featured

OpenID Connect (OIDC) is an industry-standard authentication layer built on top of the OAuth 2.0 authorization protocol. The OAuth 2.0 protocol provides security through scoped access tokens, and OIDC provides user authentication and single sign-on (SSO) functionality. For more, refer to RFC 6749 (https://datatracker.ietf.org/doc/html/rfc6749). There are two main types of authentication that you can perform with Okta:

  • The OAuth 2.0 protocol controls authorization to access a protected resource, like your web app, native app, or API service.
  • The OpenID Connect (OIDC) protocol is built on the OAuth 2.0 protocol and helps authenticate users and convey information about them. It’s also more opinionated than plain OAuth 2.0, for example in its scope definitions.

So if you want to import users and groups from an OpenID Connect (OIDC) identity provider into your Cloud Director system (provider) or tenant organization, you must configure the provider/tenant organization with this OIDC identity provider. Imported users can log in to the system/tenant organization with the credentials established in the OIDC identity provider.

We can use VMware Workspace ONE Access (VIDM) or any public identity provider, but make sure the OAuth authentication endpoint is reachable from the VMware Cloud Director cells. In this blog post, we will use Okta OIDC and configure VMware Cloud Director to use it for authentication.

Step:1 – Configure OKTA OIDC

For this blog post, I created a developer account on Okta at https://developer.okta.com/signup. Once the account is ready, follow the steps below to add Cloud Director as an application in the Okta console:

  • In the Admin Console, go to Applications > Applications.
  • Click Create App Integration.
  • To create an OIDC app integration, select OIDC – OpenID Connect as the Sign-in method.
  • Choose the type of application you plan to integrate with Okta; for Cloud Director, select Web Application.
  • App integration name: specify a name for Cloud Director.
  • Logo (optional): add a logo to accompany your app integration in the Okta org.
  • Grant type: select from the different grant type options.
  • Sign-in redirect URIs: the sign-in redirect URI is where Okta sends the authentication response and ID token for the sign-in request. In our case, for the provider, use https://<vcd url>/login/oauth?service=provider; if you are doing it for a tenant, use https://<vcd url>/login/oauth?service=tenant:<org name>.
  • Sign-out redirect URIs: after your application contacts Okta to close the user session, Okta redirects the user to this URI.
  • Assignments > Controlled access: the default access option assigns and grants login access to this new app integration for everyone in your Okta org, or you can choose to limit access to selected groups.

Click Save. This action creates the app integration and opens the settings page to configure additional options.

The Client Credentials section has the Client ID and Client secret values for the Cloud Director integration. Copy both values, as we will enter them in Cloud Director.

The General Settings section has the Okta Domain. Copy this value as well, as we will enter it in Cloud Director.

Step:2 – Cloud Director OIDC Configuration

Now I am going to configure OIDC authentication for the provider side of Cloud Director; with very minor changes (the tenant URL), it can be configured for tenants too.

Let’s go to Cloud Director, and from the top navigation bar, select Administration. In the left panel, under Identity Providers, click OIDC, and then click CONFIGURE.

General: Make sure that the OpenID Connect Status is active, and enter the client ID and client secret information from the Okta app registration that we captured above.

To use the information from a well-known endpoint to automatically fill in the configuration information, turn on the Configuration Discovery toggle and enter a URL. For Okta, the URL looks like this: https://<domain.okta.com>/.well-known/openid-configuration. Then click NEXT.
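You can preview exactly what Configuration Discovery will read by fetching the well-known document yourself. The Okta domain below is a placeholder for your own:

```shell
# Fetch the OIDC discovery document Okta publishes; VCD reads the same URL.
OKTA_DOMAIN="dev-123456.okta.com"   # placeholder, use your Okta domain
WELL_KNOWN="https://${OKTA_DOMAIN}/.well-known/openid-configuration"

# Pretty-print the JSON; look for authorization_endpoint, token_endpoint,
# userinfo_endpoint, and jwks_uri, which VCD fills into the Endpoint step.
curl -s "$WELL_KNOWN" | python3 -m json.tool
```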

Endpoint: Clicking NEXT populates the endpoint information automatically; it is essential, however, that the information is reviewed and confirmed.

Scopes: VMware Cloud Director uses the scopes to authorize access to user details. When a client requests an access token, the scopes define the permissions that the token has to access user information. Enter the scope information and click Next.

Claims: You can use this section to map the information VMware Cloud Director gets from the user info endpoint to specific claims. The claims are strings for the field names in the VMware Cloud Director response.

This is the most critical piece of configuration. Mapping of this information is essential for VCD to interpret the token/user information correctly during the login process.

For an Okta developer account, the user name is the email ID, so I am mapping Subject to email, as below.

Key Configuration:

OIDC uses a public key cryptography mechanism. A private key is used by the OIDC provider to sign the JWT token, and it can be verified by a third party using the public keys published at the OIDC provider’s well-known URL. These keys form the basis of security between the parties, so the private keys must be protected from compromise. One of the best practices for securing keys is known as key rollover or key refresh.

From VMware Cloud Director 10.3.2 and above, if you want VMware Cloud Director to automatically refresh the OIDC key configurations, turn on the Automatic Key Refresh toggle.

  • Key Refresh Endpoint should be populated automatically, because we chose auto discovery.
  • Select a Key Refresh Strategy:
    • Add – the preferred option; adds the incoming set of keys to the existing set of keys. All keys in the merged set are valid and usable.
    • Replace – replaces the existing set of keys with the incoming set of keys.
    • Expire After – you can configure an overlap period between the existing and incoming sets of keys, using the Expire Key After Period, which you can set in hourly increments from 1 hour up to 1 day.

If you did not use Configuration Discovery, upload the private key that the identity provider uses to sign its tokens, and click SAVE.

Now go to Cloud Director, and under Users, click IMPORT USERS. Choose “OIDC” as the source, add the user that exists in Okta, and assign a role to that user. That’s it.

Now you can log out from the VCD console and try to log in again; Cloud Director automatically redirects to Okta and asks for credentials to validate.

Once the user is authenticated by Okta, they are redirected back to VCD and granted access per the rights associated with the role assigned when the user was provisioned.

Verify that the Last Run and Last Successful Run timestamps are identical. The runs start at the beginning of the hour. Last Run is the timestamp of the last key refresh attempt; Last Successful Run is the timestamp of the last successful key refresh. If the timestamps differ, the automatic key refresh is failing, and you can diagnose the problem by reviewing the audit events. (This applies only if Automatic Key Refresh is enabled; otherwise, these values are meaningless.)

Bring Your Own OIDC – Tenant Configuration

For the tenant configuration, I have created a video; please take a look here. Tenants can bring their own OIDC and self-service it in the Cloud Director tenant portal.

This concludes the OIDC configuration with VMware Cloud Director. I would like to thank my colleague Ankit Shah for his guidance and review of this document.

Windows Bare Metal Servers on NSX-T overlay Networks

Featured

In this post, I will configure a Windows 2016/2019 bare metal server as a transport node in NSX-T, and then configure an NSX-T overlay segment on that server, which allows VMs and bare metal servers on the same network to communicate.

To use NSX-T Data Center on a Windows physical server (bare metal server), let’s first understand a few terms used in this post.

  • Application – represents the actual application running on the physical server, such as a web server or a database server.
  • Application Interface – represents the network interface card (NIC) that the application uses for sending and receiving traffic. One application interface per physical server is supported.
  • Management Interface – represents the NIC that manages the physical server.
  • VIF – the peer of the application interface, which is attached to the logical switch. This is similar to a VM vNIC.

Now let’s configure our Windows server to operate in an NSX overlay environment:

Enable WinRM service on Windows 2019

First, we need to enable Windows Remote Management (WinRM) on Windows Server 2016/2019 to allow the Windows server to interoperate with third-party software and hardware. To enable the WinRM service with a self-signed certificate, run:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
wget -OutFile ConfigureWinRMService.ps1 https://raw.github.com/vmware/bare-metal-server-integration-with-nsxt/blob/master/bms-ansible-nsx/windows/ConfigureWinRMService.ps1
powershell.exe -ExecutionPolicy ByPass -File ConfigureWinRMService.ps1

Run the following command to verify the configuration of WinRM listeners:

winrm e winrm/config/listener

NOTE – For production bare metal servers, please enable WinRM with HTTPS for security reasons; the procedure is explained here.

Installing NSX-T Kernel Module on Windows 2019 Server

Now let’s proceed with installing the NSX kernel module on the Windows Server 2016/2019 bare metal server. Make sure to download the NSX kernel module for Windows Server 2016/2019 matching the version of your NSX-T instance from VMware Downloads.

Start the installation of the NSX kernel module by executing the .exe file on your Windows BM server.

Configure the bare metal server as a transport node in NSX-T

Before we add the bare metal server as a transport node, we must create a new uplink profile in NSX-T to use for the bare metal servers. An uplink profile defines policies for the uplinks. The settings defined by uplink profiles can include teaming policies, active and standby links, the transport VLAN ID, and the MTU setting.

In my lab, the Windows 2016/2019 bare metal server has two network adapters: one NIC in the management VLAN and the other on a TEP VLAN (VLAN 160).

Once the uplink profile is configured, we can proceed with adding the Windows 2016/2019 bare metal server as a transport node in NSX-T. In the NSX-T web UI, go to System → Fabric → Nodes and click +ADD.

Enter the management interface IP address of your Windows bare metal host and its credentials, and do not change the Installation Location. NSX-T validates your credentials against the Windows bare metal server and then allows you to move to the next step.

On the next screen, choose a virtual switch name or leave the default, select the overlay transport zone (as we are connecting this to the overlay), and select the uplink profile and management uplink interface.

On the next screen, configure the IP address, subnet, and gateway for the TEP interface, either by specifying a static IP list or by choosing an IP pool that belongs to the TEP VLAN.

Click Next. This starts preparing your Windows bare metal server for NSX-T.

Once the preparation completes, we can attach a segment from the same screen, or click “Continue Later”; let’s click “Continue Later” for now, as we will add the segment in a separate step.

Now, if you look at your Windows bare metal server in the NSX-T console, it is ready for NSX-T and prompting us to attach an overlay segment.

Attach Overlay Segment

Select the host in the “Host Transport Nodes” section, click “Actions”, and then click “Manage Segment”, which takes you to the same screen that SELECT SEGMENT would have shown during the original deployment.

Now select the segment on which the application interface for the physical server will reside, and click “ADD SEGMENT PORT”.

Add Segment Port and Attach Application Interface

On the Add Segment Port screen:

  • Choose Assign New IP – this will be your application IP on the Windows bare metal server.
  • NSX Interface Name (default is “nsx-eth”) – this is the application interface name on the physical server.
  • Default Gateway – provide the T0 or T1 gateway address.
  • IP Assignment – I am using Static, but you can also use DHCP or an IP pool for the application interface.
  • Save – once Save is pressed, the configuration is sent to the physical server, and you can see on the physical server that the application IP has been assigned to a virtual interface.

Now in the NSX-T Manager console the host configuration shows up, with everything green.

Now you can reach this bare metal host from a VM with IP address “172.16.20.101”, which is on the same segment as the physical server, without any bridging.
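A quick way to verify this from the VM side is a plain ping to the bare metal's application IP (shown as a placeholder below, since it depends on your segment addressing); because both endpoints sit on the same overlay segment, no gateway hop is involved:

```shell
# From the VM (172.16.20.101), check reachability of the bare metal's
# application interface IP on the same overlay segment:
ping -c 3 <bm-application-ip>

# The neighbor table should show a direct (same-L2) entry for the
# bare metal, rather than a route via a gateway:
ip neigh show | grep '<bm-application-ip>'
```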

If you click on the Windows server, you can see more information, specifically the “Geneve Tunnels” between the ESXi host on which the VM is running and the Windows bare metal host on which your application is running.

This completes the configuration. It gives customers/partners the ability to run VMs and bare metal servers on the same network, with security (like micro-segmentation) managed from a single console: the NSX-T console. I hope this helps. Please share your feedback 🙂

Quick Tip – Delete Stale Entries on Cloud Director CSE

Featured

Container Service Extension (CSE) is a VMware vCloud Director (VCD) extension that helps tenants create and work with Kubernetes clusters. CSE brings Kubernetes as a Service to VCD by creating customized VM templates (Kubernetes templates) and enabling tenant users to deploy fully functional Kubernetes clusters as self-contained vApps.

If for any reason a tenant’s cluster creation gets stuck and continues to show “CREATE:IN_PROGRESS” or “Creating” for many hours, it means that cluster creation has failed for an unknown reason and the representing defined entity has not transitioned to the ERROR state.

Solution

To fix this, a provider admin needs to use the API to delete these stale entries. A few simple steps clean them up.

First – let’s get the “X-VMWARE-VCLOUD-ACCESS-TOKEN” for API calls by making the API call below:

  • https://<vcd url>/cloudapi/1.0.0/sessions/provider
  • Authentication Type: Basic
  • Username/password – <adminid@system>/<password>

The above API call returns “X-VMWARE-VCLOUD-ACCESS-TOKEN” in the header section of the response. Copy this token and use it as the “Bearer” token in subsequent API calls.
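As a sketch (the FQDN, password, and the API version in the Accept header are placeholders for your environment), the session request looks like this; note that the endpoint expects a POST with basic authentication:

```shell
# Request a provider session; the bearer token comes back in the
# X-VMWARE-VCLOUD-ACCESS-TOKEN response header (-D - prints headers).
curl -sk -X POST \
  -u 'administrator@system:<password>' \
  -H 'Accept: application/json;version=37.2' \
  -D - -o /dev/null \
  'https://<vcd-fqdn>/cloudapi/1.0.0/sessions/provider' \
  | grep -i 'x-vmware-vcloud-access-token'
```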

Second – we need the “Cluster ID” of the stale cluster that we want to delete. To get the “Cluster ID”, go into the Cloud Director Kubernetes Container Clusters extension, click on the stuck cluster, and get the Cluster ID in URN format.

Third (optional) – get the cluster details using the API call below, authenticating with the Bearer token obtained in the first step:

GET https://<vcd-fqdn>/cloudapi/1.0.0/entities/<cluster-id>/

Fourth – delete the stale cluster using the API call below, providing the “Cluster ID” captured in the second step and using authentication type “Bearer Token”:

DELETE https://<vcd-fqdn>/cloudapi/1.0.0/entities/<cluster-id>/

The above API call should respond with “204 No Content”, which means the call executed successfully.
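The optional GET and the DELETE can be sketched with curl as below (the FQDN, token value, and cluster URN are placeholders to fill in from the previous steps):

```shell
TOKEN='<X-VMWARE-VCLOUD-ACCESS-TOKEN value>'
CLUSTER_ID='<cluster-id-in-urn-format>'
VCD='<vcd-fqdn>'

# Optional: inspect the stale defined entity first (step three)
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  -H 'Accept: application/json;version=37.2' \
  "https://${VCD}/cloudapi/1.0.0/entities/${CLUSTER_ID}"

# Delete the stale entity (step four); -w prints the HTTP status,
# which should be 204 on success
curl -sk -X DELETE -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: Bearer ${TOKEN}" \
  -H 'Accept: application/json;version=37.2' \
  "https://${VCD}/cloudapi/1.0.0/entities/${CLUSTER_ID}"
```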

Now if you log in to the Cloud Director “Kubernetes Container Clusters” extension, the stale/stuck cluster entry deleted by the API call above should be gone.

Now go to the Cloud Director vApp section and check whether any vApp/VM is still running for that cluster; shut down that VM and delete it from Cloud Director. Three simple API calls complete the process.

VMware Cloud Director Assignable Storage Policies to Entity Types

Featured

Service providers can use storage policies in VMware Cloud Director to create a tiered storage offering, like Gold, Silver, and Bronze, or even offer dedicated storage to tenants. With the enhancement of storage policies to support VMware Cloud Director entities, providers now have the flexibility to control how tenants use storage policies. Providers can offer not only tiered storage, but also isolated storage for running VMs, containers, edge gateways, catalogs, and so on.
A common use case that this Cloud Director 10.2.2 update addresses is the need for shared storage across clusters or offering lower cost storage for non-running workloads. For example, instead of having a storage policy with all VMware Cloud Director entities, you can break your storage policy into a “Workload Storage Policy” for all your running VMs and containers, and dedicate a “Catalog Storage Policy” for longer term storage. A slower or low cost NFS option can back the “Catalog Storage Policy”, while the “Workload Storage Policy”  can run on vSAN.

Starting with VMware Cloud Director 10.2.2, if a provider does not want a provider VDC storage policy to support certain types of VMware Cloud Director entities, they can edit and limit the list of entities associated with the policy. Here is the list of supported entity types:

  • Virtual Machines – Used for VMs and vApps and their disks
  • VApp/VM Templates – Used for vApp Templates
  • Catalog Media – Used for Media inside catalogs
  • Named Disks – Used for Named disks
  • TKC – Used for TKG Clusters
  • Edge Gateways – Used for Edge Gateways

You can limit the entity types that a storage policy supports to one or more types from this list. When you create an entity, only the storage policies that support its type are available.

Use Case – Catalog-Only Storage Policy

There are many use cases for assignable storage policies. I am demonstrating this one because many providers have asked for this feature in my discussions. For this use case we will take the entity types Catalog Media and VApp/VM Templates.

Adding the Media and vApp Template entity types to a storage policy marks it as usable with VDC catalogs. These entity types can be added at the PVDC storage policy layer. Storage policies associated with datastores intended for catalog-only storage can be marked with these entity types, to force only catalog-related items onto the catalog-only datastore.

When added: VCD users will be able to use this storage policy with media/templates. In other words, tenants will see this storage policy as an option when provisioning their catalogs on a specific storage policy.

  • In Cloud Director provider portal, select Resources and click Cloud Resources.
  • Select Provider VDCs, and click the name of the target provider virtual data center.
  • Under Policies, select Storage.
  • Click the radio button next to the target storage policy, and click Edit Supported Types.
  • From the Supports Entity Types drop-down menu, select Select Specific Entities.
  • Select the entities that you want the storage policy to support, and click Save.

Validation

Let’s validate this functionality by logging in as a tenant and going into the “Storage Policies” settings, where we can see this Org VDC has two storage policies assigned by the provider.

Now let’s deploy a virtual machine in the same Org VDC; you can see that the policy “WCP”, which was marked as catalog-only, is not available for VM provisioning.

In the same Org VDC, let’s create a new “Catalog”; here you can see both policies are visible, one exclusively for “Catalog” and another allowed for all entity types.

Policy removal: VCD users will no longer be able to use this storage policy with media/templates, but whatever is already there will continue to be there.

This addition to Cloud Director gives providers the opportunity to manage storage based on entity type. This is one use case; similarly, one storage policy could be used for edge placement, another to spin up production-grade Tanzu Kubernetes clusters, while default storage can be used by CSE native Kubernetes clusters for development container workloads. This opens up new monetization opportunities for providers: upgrade your Cloud Director environment and start monetizing.

This Post is also available as Podcast

Auto Scale Applications with VMware Cloud Director

Featured

Starting with VMware Cloud Director 10.2.2, tenants can auto scale applications depending on current CPU and memory utilization. Based on predefined criteria for CPU and memory use, VMware Cloud Director can automatically scale up or down the number of VMs in a selected scale group.

Cloud Director Scale Groups are a new top level object that tenants can use to implement automated horizontal scale-in and scale-out events on a group of workloads. You can configure auto scale groups with:

  • A source vApp template
  • A load balancer network
  • A set of rules for growing or shrinking the group based on the CPU and memory use

VMware Cloud Director automatically spins up or shuts down VMs in a scaling group based on the above three settings. This blog post will help you enable Scale Groups in Cloud Director, and we will also configure a scale group.

Configure Auto Scale

Log in to the Cloud Director 10.2.2 cell with admin/root credentials and enable metric data collection and publishing, either by setting up metrics collection in a Cassandra database or by collecting metrics without data persistence. In this post we are going to configure it without metrics data persistence. To collect metrics data without data persistence, run the following commands:

#/opt/vmware/vcloud-director/bin/cell-management-tool manage-config -n statsFeeder.metrics.collect.only -v true 

#/opt/vmware/vcloud-director/bin/cell-management-tool manage-config -n statsFeeder.metrics.publishing.enabled -v true

Now the second step is to create a file named “metrics.groovy” in the /tmp folder of the Cloud Director appliance with the following contents:

configuration {
    metric("cpu.ready.summation") {
        currentInterval=20
        historicInterval=20
        entity="VM"
        instance=""
        minReportingInterval=300
        aggregator="AVERAGE"
    }
}

Change the file permissions appropriately and import the file using the command below:

$VCLOUD_HOME/bin/cell-management-tool configure-metrics --metrics-config /tmp/metrics.groovy

Let’s enable the auto scaling plugin by running the commands below on the Cloud Director cell:

$VCLOUD_HOME/bin/cell-management-tool configure-autoscale --set enabled=true
$VCLOUD_HOME/bin/cell-management-tool configure-autoscale --set username=<username> (an account with administrator privileges)
$VCLOUD_HOME/bin/cell-management-tool configure-autoscale --encrypt --set password=<password> (password for the above account)

In the Cloud Director provider console, under Customize Portal, there will be a plugin which the provider can enable for tenants who need auto scale functionality for their applications, or which can be made available to all tenants.

Auto scale shipped with VMware Cloud Director 10.2.2, which allows service providers to grant tenants the rights to create scale groups. Under Tenant Access Control, select Rights Bundles, then select the “vmware:scalegroup Entitlement” bundle and click Publish.

Also ensure the provider adds the necessary VMWARE:SCALEGROUP rights to the tenant roles that want to use scale groups.

Tenant Self Service

On the tenant portal after login, select Applications, select the Scale Groups tab, and click New Scale Group.

Select an organization VDC in which the tenant wants to create the scale group.

Alternatively, the tenant can access scale groups from a selected organization virtual data center (VDC) by going into the specific organization VDC.

  • Enter a name and a description of the new scale group.
  • Select the minimum and maximum number of VMs to which you want the group to scale and click Next.

Select a VM template for the VMs in the scale group and a storage policy, and click Next. The template needs to be pre-populated in a catalog, either by the tenant or by the provider and published to tenants.

The next step is to select a network for the scale group. If the tenant’s Org VDC is backed by NSX-T Data Center and NSX ALB (AVI) has been published as a load balancer by the provider, the tenant can choose the NSX ALB load balancer edge on which load balancing has been enabled and a server pool has been set up by the tenant before enabling scale groups.

If the tenant wants to manage the load balancer on their own, or if there is no need for a load balancer, then select I have a fully set-up network; auto scale will automatically add VMs to this network.

VMware Cloud Director starts the initial expansion of the scale group to reach the minimum number of VMs. Basically, it starts creating VMs from the template that the tenant selected while creating the scale group, and it continues spinning up VMs until the minimum number specified for the scale group is reached.

Add an Auto Scaling Rule

  1. Click Add Rule.
  2. Enter a name for the rule.
  3. Select whether the scale group must expand or shrink when the rule takes effect.
  4. Select the number of VMs by which you want the group to expand or shrink when the rule takes effect.
  5. Enter a cooldown period in minutes after each auto scale event in the group. The conditions cannot trigger another scaling until the cooldown period expires, and the cooldown period resets when any of the rules of the scale group takes effect.
  6. Add a condition that triggers the rule. The duration period is the time for which the condition must be valid to trigger the rule. To trigger the rule, all conditions must be met.
  7. To add another condition, click Add Condition. A tenant can add multiple conditions.
  8. Click Add.

From the details view of a scale group, when you select Monitor, you can see all tasks related to the scale group. For example, you can see the time of creation of the scale group, all growing or shrinking tasks for the group, the rules that initiated the tasks, and, in the Virtual Machines section, which VM was created at what time and the IP address of each VM.

Here is the section showing scale tasks that were triggered and their status.

The Virtual Machines section, as mentioned, provides scale group VM information and details.

Here is another section showing scale tasks that were triggered, with their status, start time, and completion time.

Auto scale groups in Cloud Director 10.2.2 bring very important functionality natively to Cloud Director, requiring no external components like vRealize Orchestrator or vROps and incurring no additional cost to the tenant or provider. Go ahead, upgrade your Cloud Director, enable it, and let tenants enjoy this cool functionality.

Getting Started with Tanzu Basic

In the process of modernizing your data center to run VMs and containers side by side, you can run Kubernetes as part of vSphere with Tanzu Basic. Tanzu Basic embeds Kubernetes into the vSphere control plane for the best administrative control and user experience. Provision clusters directly from vCenter and run containerized workloads with ease. Tanzu Basic is the most affordable vSphere with Tanzu edition.

To install and configure Tanzu Basic without NSX-T, at a high level there are four steps, which I will cover across three blog posts:

  1. vSphere 7 with a cluster with HA and DRS enabled should already be configured
  2. Installation of VMware HA Proxy Load Balancer – Part1
  3. Tanzu Basic – Enable Workload Management – Part2
  4. Tanzu Basic – Building TKG Cluster – Part3

Deploy VMware HAProxy

There are a few topologies for setting up Tanzu Basic with vSphere-based networking. For this blog we will deploy the HAProxy VM with three virtual NICs: one on a “Management” network, one on a “Workload” network, and a third on a “Frontend” network, which will be used by DevOps users; external services will also access HAProxy through virtual IPs on this frontend network.

  • Management – communicating with vCenter and HAProxy
  • Workload – IPs assigned to Kubernetes nodes
  • Frontend – DevOps users and external services

For this blog, I have created three VLAN-backed networks with the IP ranges below:

  • tkgmgmt – 192.168.115.0/24 – VLAN 115
  • Workload – 192.168.116.0/24 – VLAN 116
  • Frontend – 192.168.117.0/24 – VLAN 117

Here is the topology diagram; HAProxy has been configured with three NICs, each connected to one of the VLANs we created above.

NOTE – if you want to deep dive into this networking, refer here; that blog post describes it very nicely, and I have used the same networking schema in this lab deployment.

Deploy VMware HA Proxy

This is not a common HAProxy; it is a customized one whose Data Plane API is designed to enable Kubernetes workload management with Project Pacific on vSphere 7. VMware HAProxy deployment is very simple: you can directly access/download the OVA from here and follow the same procedure as for any other OVA deployment on vCenter. A few important points are covered below:

On the Configuration screen, choose “Frontend Network” for the three-NIC deployment topology.

Next is the Networking section, which is the heart of the solution; here we map the port groups created above to the Management, Workload, and Frontend networks.

The management network is on VLAN 115; this is the network where the vSphere with Tanzu Supervisor control plane VMs/nodes are deployed.

The workload network is on VLAN 116, where the Tanzu Kubernetes cluster VMs/nodes will be deployed.

The frontend network is on VLAN 117; this is where the load balancer VIPs (Supervisor API server, TKG API servers, TKG LB services) are provisioned. The frontend network and the workload network must route to each other for successful WCP enablement.

The next page is the most important one: here we provide the VMware HAProxy appliance configuration. Provide a root password and tick/untick the root login option based on your choice. The TLS fields will be automatically generated if left blank.

In the “network config” section, provide network details for the VMware HAProxy management, workload, and frontend/load balancer networks. These all require static IP addresses in CIDR format; specify a CIDR prefix that matches the subnet mask of your networks.

For Management IP: 192.168.115.5/24 and GW:192.168.115.1

For Workload IP: 192.168.116.5/24 and GW:192.168.116.1

For Frontend IP: 192.168.117.5/24 and GW: 192.168.117.1. This is not optional if you selected Frontend in the “Configuration” section.

In the Load Balancing section, enter the load balancer IP ranges. These IP addresses will be used as virtual IPs by the load balancer, and they come from the frontend network IP range.

Here I am specifying 192.168.117.32/27; this segment gives me 30 addresses for VIPs for Tanzu management plane access and applications exposed for external consumption. Ignore “192.168.117.30” in the image background.
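The address math behind that claim is simple; here is a minimal shell check (counting the /27 as a subnet with its network and broadcast addresses excluded, as the post does):

```shell
prefix=27
total=$(( 1 << (32 - prefix) ))   # 2^(32-27) = 32 addresses in a /27
usable=$(( total - 2 ))           # exclude network and broadcast addresses
echo "a /${prefix} range provides ${usable} usable VIP addresses"
```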

Enter the Data Plane API management port (enter 5556), and also enter a username and password for the load balancer Data Plane API.

Finally, review the summary and click Finish. This deploys the VMware HAProxy LB appliance.

Once the deployment completes, power on the appliance and SSH into the VM using the management plane IP, then check that all the interfaces have the correct IPs:

Also check that you can ping the frontend IP range and the other IP ranges. Stay tuned for Part 2.
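For example (the gateway IPs follow the lab addressing above; adjust to your environment), the checks from the appliance shell can look like:

```shell
# Show all interfaces with their CIDR addresses, one line each
ip -brief address show

# Ping each network's gateway to confirm basic connectivity
for gw in 192.168.115.1 192.168.116.1 192.168.117.1; do
  ping -c 2 -W 2 "$gw" >/dev/null && echo "reachable: $gw"
done
```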

Load Balancer as a Service with Cloud Director

Featured

NSX Advanced Load Balancer’s (AVI) intent-based software load balancer provides scalable application delivery across any infrastructure. AVI provides 100% software load balancing to ensure a fast, scalable, and secure application experience. It delivers elasticity and intelligence across any environment. It scales from 0 to 1 million SSL transactions per second in minutes. It achieves 90% faster provisioning and 50% lower TCO than a traditional appliance-based approach.

With the release of Cloud Director 10.2, NSX ALB is natively integrated with Cloud Director to provide self-service Load Balancing as a Service (LBaaS), where providers can expose load balancing functionality to tenants, and tenants consume it based on their requirements. In this blog post we will cover how to configure LBaaS.

Here is the high-level workflow:

  1. Deploy NSX ALB Controller Cluster
  2. Configure NSX-T Cloud
  3. Discover NSX-T inventory, logical segments, NSGroups (ALB does this automatically)
  4. Discover vCenter inventory, hosts, clusters, switches (ALB does this automatically)
  5. Upload the SE OVA to a content library (ALB does this automatically; you just need to specify the name of the content library)
  6. Register NSX ALB Controller, NSX-T Cloud and Service Engines to Cloud Director and Publish to tenants (Provider Controlled Configuration)
  7. Create virtual services, pools, and other settings (Tenant Self Service)
  8. Create/delete SE VMs and connect them to tenant networks (ALB/VCD automatically)

Deploy NSX ALB (AVI) Controller Cluster

The NSX ALB (AVI) Controller provides a single point of control and management for the cloud. The AVI Controller runs on a VM and can be managed using its web interface, CLI, or REST API, or in this case Cloud Director. The AVI Controller stores and manages all policies related to services and management. To ensure AVI Controller high availability, we need to deploy 3 AVI Controller nodes to create a 3-node AVI Controller cluster.

The deployment process is documented here, and the cluster creation process is here.

Create NSX-T Cloud inside NSX ALB (AVI) Controller

The NSX ALB (AVI) Controller uses APIs to interface with the NSX-T Manager and vCenter to discover the infrastructure. Here are the high-level activities to configure the NSX-T cloud in the NSX ALB management console:

  1. Configure NSX-T manager IP/URL (One per Cloud)
  2. Provide admin credentials
  3. Select Transport zone (One to One Mapping – One TZ per Cloud)
  4. Select Logical Segment to use as SE Management Network
  5. Configure vCenter server IP/URL (One per Cloud)
  6. Provide Login username and password
  7. Select Content Library to push SE OVA into Content Library

Service Engine Groups & Configuration

Service Engines are created within a group, which contains the definition of how the SEs should be sized, placed, and made highly available. Each cloud will have at least one SE group.

  1. SE Groups contain sizing, scaling, placement and HA properties
  2. A new SE will be created from the SE Group properties
  3. SE Group options will vary based upon the cloud type
  4. An SE is always a member of the group it was created within (in this case the NSX-T cloud)
  5. Each SE group is an isolation domain
  6. Apps may gracefully migrate, scale, or failover across SEs in the groups

Service Engine High Availability:

Active/Standby

  1. VS is active on one SE, standby on another
  2. No VS scaleout support
  3. Primarily for default gateway / non-SNAT app support
  4. Fastest failover, but half of SE resources are idle

Elastic N + M

  1. All SEs are active
  2. N = number of SEs a new Virtual Service is scaled across
  3. M = the buffer, or number of failures the group can sustain
  4. SE failover decision determined at time of failure
  5. Session replication done after new SE is chosen
  6. Slower failover, less SE resource requirement

Elastic Active / Active 

  1. All SEs are active
  2. Virtual Services must be scaled across at least 2 Service engines
  3. Session info proactively replicated to other scaled service engines
  4. Faster failover, require more SE resources

Cloud Director Configuration

Cloud Director configuration is twofold: provider config and tenant config. Let’s first cover the provider config.

Provider Configuration

Register AVI Controller: a provider administrator logs in as admin and registers the AVI Controller with Cloud Director. The provider has the option to add multiple AVI Controllers.

NOTE – in case you are registering with NSX ALB’s default self-signed certificate and it throws an error during registration, regenerate the self-signed certificate in NSX ALB.

Register NSX-T cloud

Next, we need to register the NSX-T cloud, which we configured in the ALB Controller, with Cloud Director:

  1. Select one of the registered AVI Controllers
  2. Provide a meaningful name for the controller
  3. Select the NSX-T cloud which we registered in AVI
  4. Click on ADD.

Assign Service Engine groups

Now register service engine groups, either “Dedicated” or “Shared”, based on tenant requirements; or the provider can have both types of groups and assign them to tenants as needed.

  1. Select NSX-T Cloud which we had registered above
  2. Select the “Reservation Model”
    1. Dedicated Reservation Model: for each tenant Organization VDC edge gateway, AVI creates two Service Engine nodes for each LB-enabled Org VDC edge gateway.
    2. Shared Reservation Model: shared SEs are elastic and shared among all tenants. AVI creates a pool of service engines that are shared across tenants. Capacity allocation is managed in VCD; AVI elastically deploys and un-deploys service engines based on usage.

Provider Enables and Allocates resources to Tenant

The provider enables LB functionality in the context of an Org VDC edge by following the steps below:

  1. Click on Edges 
  2. Choose the edge on which to enable load balancing
  3. Go to “Load Balancer” and click on “General Settings”
  4. Click on “Edit”
  5. Toggle on Activate to activate the load balancer
  6. Select Service Specification

The next step is to assign service engines to the tenant based on requirements. For that, go to Service Engine Group, click “ADD”, and add one of the SE groups registered previously to one of the customer’s edges.

Provider can restrict usage of Service Engines by configuring:

  1. Maximum Allowed: The maximum number of virtual services the Edge Gateway is allowed to use.
  2. Reserved: The number of guaranteed virtual services available to the Edge Gateway.

Tenant User Self Service Configuration

Pools: pools maintain the list of servers assigned to them and perform health monitoring, load balancing, and persistence.

  1. Inside General Settings some of the key settings are:
    1. Provide Name of the Pool
    2. Load Balancing Algorithm
    3. Default Server Port
    4. Persistence
    5. Health Monitor
  2. Inside Members section:
    1. Add Virtual Machine IP addresses which needs to be load balanced
    2. Define State, Port and Ratio
    3. SSL Settings allow SSL offload and Common Name Check

Virtual Services: A virtual service advertises an IP address and ports to the external world and listens for client traffic. When a virtual service receives traffic, it may be configured to:

  1. Proxy the client’s network connection.
  2. Perform security, acceleration, load balancing, gather traffic statistics, and other tasks.
  3. Forward the client’s request data to the destination pool for load balancing.

The tenant chooses the Service Engine group which the provider has assigned, then chooses the load balancer pool created in the step above, and most importantly the virtual IP. This IP address can be from the external IP range of the Org VDC; if you want an internal IP, you can use any IP.

So in my example, I am running two virtual machines with Org VDC internal IP addresses, and the VIP is from the external public IP address range; if I browse to the VIP, I can reach the web servers successfully using the VCD/AVI integration.
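A quick check of that behaviour can be sketched as below (the VIP is a placeholder, and this assumes each pool member serves a page that identifies the server, so repeated requests show the traffic being distributed):

```shell
# Repeated requests to the VIP should be distributed across the two
# pool members according to the pool's load balancing algorithm:
for i in 1 2 3 4; do
  curl -s 'http://<vip-address>/'
done
```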

This completes the basic integration and configuration of LBaaS using Cloud Director and NSX Advanced Load Balancer. Feel free to share feedback.

VMware Cloud Director Two Factor Authentication with VMware Verify

In this post, I will be configuring two-factor authentication (2FA) for VMware Cloud Director using Workspace ONE Access, formerly known as VMware Identity Manager (vIDM). Two-factor authentication is a mechanism that checks username and password as usual, but adds an additional security control before users are authenticated. It is a particular deployment of a more generic approach known as Multi-Factor Authentication (MFA). Throughout this post, I will be configuring VMware Verify as that second authentication factor.

What is VMware Verify ?

VMware Verify is built into Workspace ONE Access (vIDM) at no additional cost, providing a 2FA solution for applications. VMware Verify can be set as a requirement on a per-app basis for web or virtual apps on the Workspace ONE launcher, or for logging in to Workspace ONE to view your launcher in the first place. The VMware Verify app is currently available on iOS and Android. VMware Verify supports 3 methods of authentication:

  1. OneTouch approval
  2. One-time passcode via VMware Verify app (soft token)
  3. One-time passcode over SMS

By using VMware Verify, security is increased since a successful authentication does not depend only on something users know (their passwords) but also on something users have (their mobile phones), and for a successful break-in, attackers would need to steal both things from compromised users.

1. Configure VMware Verify

First you need to download and install “VMware Workspace ONE Access”, which is very simple to deploy using the OVA. VMware Verify is provided as a service, and thus does not require installing anything on an on-premises server. To enable VMware Verify, you must contact VMware support; they will provide you a security token, which is all you need to enable the integration with VMware Workspace ONE Access (vIDM). Once you get the token, log in to vIDM as an admin user and:

  1. Click on the Identity & Access Management tab
  2. Click on the Manage button
  3. Select Authentication Methods
  4. Click on the configure icon (pencil) next to VMware Verify
  5. A new window will pop-up, on which you need to select the Enable VMware Verify checkbox, enter the security token provided by VMware support, and click on Save.

2. Create a Local Directory on VMware Workspace One Access

VMware Workspace ONE Access supports not only Active Directory and LDAP directories, but also other types of directories, such as local directories and Just-in-Time directories. For this lab, I am going to create a local directory using the local directory feature of Workspace ONE Access. Local users are added to a local directory on the service; we need to manage the local user attribute mapping and password policies. You can create local groups to manage resource entitlements for users.

  1. Select the Directories tab
  2. Click on “Add Directory”
  3. Specify the directory and domain name (this is the same domain name I registered for VMware Verify)

3. Create/Configure a built-in Identity Provider

Once the second authentication factor is enabled as described in steps 1 and 2, it must be added as an authentication method to a Workspace ONE Access built-in identity provider. If one already exists in your environment, you can reconfigure it. Alternatively, you can create a new built-in identity provider as explained below. Log in to Workspace ONE Access as an admin user and then:

  1. Select the Identity & Access Management tab
  2. Click on the Manage button
  3. Click on the Identity Providers link
  4. Click on the Add Identity Provider button and select Create Built-in IDP
  5. Enter a name describing the Identity Provider (IdP)
  6. Which users can authenticate using the IdP – in the example below I am selecting the local directory that I created above.
  7. Network ranges from which users will be directed to the authentication mechanism described on the IdP
  8. The authentication methods to associate with this IdP – Here I am selecting VMware Verify as well as Local Directory.
  9. Finally click on the Add button

4. Update Access Policies on Workspace One Access

The last configuration step on Workspace One Access (vIDM) is to update the default access policy to include the second factor authentication mechanism. For that, login into Workspace One Access as an admin user and then:

  1. Select the Identity & Access Management tab
  2. Click on the Manage button
  3. Click on the Policies link
  4. Click on the Edit Default Policy button
  5. This will open a new page showing the details of the default access policy. Go to “Configuration” and click on “ALL RANGES”.

A new window will pop up. Modify the settings right below the line “then the user may authenticate using:”

  1. Select Password as the first authentication method – this way users will have to enter their ID and password as defined in the configured local directory.
  2. Select VMware Verify as the second authentication method – after a successful password authentication, users will get a notification on their mobile phones to accept or deny the login request.
  3. I am leaving the line “If preceding Authentication Method fails or is not applicable, then:” empty because I don’t want any fallback authentication mechanism; you can choose one based on your needs.

5. Download the app on your mobile and register a user from a Cloud Director Organization

  1. Open the app store on your mobile phone, search for VMware Verify, and download it.
  2. Once it is downloaded, open the application. It will ask for your mobile number and e-mail address; enter your domain details. In my lab setup, I provided my mobile number and an e-mail address that is only valid in my lab. After clicking OK, you will be given two options for verifying your identity:
    1. Receiving an SMS message – the SMS contains a registration code that you enter into the app.
    2. Receiving a phone call – after clicking on this option, the app shows a registration code that you need to type on the phone pad once you receive the call.
  3. Since I am using the SMS option, the app asks me to enter the code received in the SMS manually (XopRcVjd4u2).
  4. Once your identity has been verified, you will be asked to protect the app by setting a PIN. After that, the app will show that no accounts are configured yet.
  5. Click on Account and add the account.

Immediately after that, we will start receiving tokens in the VMware Verify mobile app, so at this moment you are ready to move to the next step.

6. Enable VMware Cloud Director Federation with VMware Workspace ONE Access

There are three authentication methods that are supported by vCloud Director:

Local: local users created when vCD is installed or when a new organization is created.

LDAP service: the LDAP service enables organisations to use their own LDAP servers for authentication. Users can then be imported into vCD from the configured LDAP.

SAML Identity Provider: a SAML Identity Provider can be used to authenticate users in organisations. SAML v2.0 metadata is required for the service to be configured. The metadata must include the location of the single sign-on service, the single logout service, and the X.509 certificate for the service. In this post, we will set up federation between VMware Workspace ONE Access and VMware Cloud Director.
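To make the metadata requirement concrete, here is a rough sketch of the shape of SAML 2.0 service-provider metadata; every value below (entity ID, ACS URL, certificate) is a placeholder of my own, not actual Cloud Director output:

```shell
# Rough sketch of SAML 2.0 service-provider metadata; all values below
# (entity ID, ACS URL, certificate) are placeholders, not real VCD output.
cat > sp-metadata-example.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
                     entityID="my-org">
  <md:SPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <md:KeyDescriptor use="signing">
      <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ds:X509Data>
          <ds:X509Certificate>MIIB...placeholder...</ds:X509Certificate>
        </ds:X509Data>
      </ds:KeyInfo>
    </md:KeyDescriptor>
    <md:AssertionConsumerService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
        Location="https://vcd.example.com/saml/SSO/alias/vcd" index="0"/>
  </md:SPSSODescriptor>
</md:EntityDescriptor>
EOF

# Pull out the entity ID to confirm it matches what was configured.
grep -o 'entityID="[^"]*"' sp-metadata-example.xml
```

The real spring_saml_metadata.xml that Cloud Director generates is larger, but it follows this EntityDescriptor/SPSSODescriptor structure, with the entity ID, endpoints, and signing certificate embedded.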

So, let’s go ahead and log in to the VMware Cloud Director organization, go to “Administration”, and click on “SAML”.

  1. Enable federation by setting “Entity ID” to any unique string; in this case I am using the org name, which in my lab is “abc”.
  2. Click on “Generate” to generate a new certificate, then click “SAVE”.
  3. Download the metadata from the link; this downloads the file “spring_saml_metadata.xml”. This step can be performed by a system or org administrator.
  4. In the VMware Workspace ONE Access (vIDM) admin console, go to “Catalog” and create a new web application.
    1. Enter the application name and description, upload an icon, and choose a category.
  5. On the next screen, keep the authentication type SAML 2.0 and paste the XML metadata downloaded in step 3 into the URL/XML window. Scroll down to Advanced Properties.
  6. In Advanced Properties, keep the defaults but add Custom Attribute Mappings, which describe how vIDM user attributes translate to VCD user attributes.
  7. Finish the wizard by clicking Next, select an access policy (keep the default), and review the summary on the next screen.
  8. Next, retrieve the metadata configuration of vIDM by going back to Catalog and clicking on Settings. From SAML Metadata, download the Identity Provider (IdP) metadata.
  9. Now we can finalize the SAML configuration in vCloud Director. On the Federation page, toggle the Use SAML Identity Provider button to enable it, import the downloaded metadata (idp.xml) with the Browse and Upload buttons, and click Apply.
  10. Finally, we need to import some users/groups before anyone can log in via SAML. You can import VMware Workspace ONE Access (vIDM) users by user name or by group, and assign a role to each imported user.

This completes the federation process between VMware Workspace ONE Access (vIDM) and VMware Cloud Director. For more details, you can refer to this blog post.

Result – Cloud Director Two Factor Authentication in Action

Let your tenants open a browser and go to their tenant URL; they will be automatically redirected to the VMware Workspace ONE Access page for authentication:

  1. The user enters a user name and password; once successfully authenticated, the flow moves to 2FA.
  2. In the next step, the user gets a notification on their mobile phone.
  3. Once the user approves the authentication on the phone, VMware Workspace ONE Access grants access based on the role assigned in VMware Cloud Director.

On-Board a New User

  1. Create a new user in VMware Workspace ONE Access and add them to the application access.
  2. The user gets an e-mail to set up their password and must configure it.
  3. The administrator logs in to Cloud Director and imports the newly created user from SAML with a Cloud Director role.
  4. The user browses the cloud URL; after logging in to the portal with user ID and password, they are asked to provide a mobile number for second-factor authentication.
  5. After entering the mobile number, if the user has installed the “VMware Verify” app, they get an Approve/Deny notification; if the app is not installed, they can click on “Sign in with SMS”, receive an SMS code, and enter it for second-factor authentication.
  6. Once the user enters the passcode received on their phone, VMware Workspace ONE Access allows them to log in to Cloud Director.

This completes the installation and configuration of VMware Verify with VMware Cloud Director. You can add extras such as branding of your cloud, which will give your cloud its own identity.

Ingress on Cloud Director Container Service Extension

In this blog post I will be deploying an Ingress controller along with a load balancer (the LB was deployed in the previous post) into a tenant organization VDC Kubernetes cluster that was deployed by Cloud Director Container Service Extension.

What is Ingress in Kubernetes

“NodePort” and “LoadBalancer” let you expose a service by specifying that value in the service’s type. Ingress, on the other hand, is a completely independent resource from your service: you declare, create, and destroy it separately from your services.

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give services externally reachable URLs, load balance traffic, terminate SSL/TLS, and offer name-based virtual hosting.
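To make this concrete, a minimal Ingress resource looks roughly like the sketch below; the host name app.example.com and the Service name example-svc are placeholders of my own, not names from this post:

```shell
# Minimal Ingress manifest sketch; the host and backend Service names are
# placeholders and not taken from this post.
cat > example-ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com          # name-based virtual hosting
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-svc      # traffic for this host/path goes here
            port:
              number: 80
EOF

# Apply with: kubectl apply -f example-ingress.yaml
grep -c "kind: Ingress" example-ingress.yaml   # → 1
```

The rules section is where the routing logic lives: each rule matches a host and path and forwards matching traffic to a backend Service.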


Pre-requisite

Before we begin we’ll need to have a few pieces already in place:

  • A Kubernetes cluster (See Deployment Options for provider specific details)
  • kubectl configured with admin access to your cluster
  • RBAC must be enabled on your cluster

Install Contour

To install Contour, Run:

  • $ kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

This command creates:

  • A new namespace projectcontour
  • A Kubernetes DaemonSet running Envoy on each node in the cluster, listening on host ports 80/443

Now we need to retrieve the external address assigned to Contour by the load balancer we deployed in the previous post. To get the LB IP, list the services in the projectcontour namespace (for example, with kubectl get svc -n projectcontour) and note the EXTERNAL-IP value. This external IP comes from the range of IP addresses we configured for the LB; we will NAT this IP on the VDC Edge Gateway so that it can be reached from outside or from the internet.

Deploy an Application

Next we need to deploy at least one Ingress object before Contour can serve traffic. Note that, as a security feature, Contour does not expose a port to the internet unless there’s a reason it should. A great way to test your Contour installation is to deploy a sample application.

In this example, we will deploy a simple web application, configure load balancing for it using the Ingress resource, and access it via the load balancer IP/FQDN. This application is hosted on GitHub and can be downloaded from here. Once downloaded:

  • Create the coffee and the tea deployments and services with kubectl apply
  • Create a secret with an SSL certificate and a key
  • Create an Ingress resource
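The TLS secret step can be sketched as follows; the common name cafe.example.com and the file names are my own placeholders, and the kubectl command is shown as a comment since it needs access to your cluster:

```shell
# Generate a throwaway self-signed certificate and key for the Ingress TLS secret.
# The common name and file names below are placeholders.
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout cafe.key -out cafe.crt -days 365 \
  -subj "/CN=cafe.example.com"

# Load the pair into the cluster as a TLS secret (requires kubectl and cluster access):
#   kubectl create secret tls cafe-secret --cert=cafe.crt --key=cafe.key

# Sanity-check the certificate we just created.
openssl x509 -in cafe.crt -noout -subject
```

The Ingress resource can then reference this secret in its tls section so that Contour terminates HTTPS for the application.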

This completes the deployment of the application.

Test the Application

To access the application, browse to the coffee and the tea services from a desktop that has access to the service network. You will also need to add the hostname/IP to your /etc/hosts file or to your DNS server.

  • To get coffee, request the coffee service path on the hostname defined in your Ingress resource.
  • If you prefer tea, request the tea service path instead.

This completes the installation and configuration of Ingress on VMware Cloud Director Container Service Extension. Contour is VMware’s open-source Ingress controller, offers a rich feature set, and its documentation can be found here.