VMware Cloud Director Two Factor Authentication with VMware Verify


In this post, I will be configuring two-factor authentication (2FA) for VMware Cloud Director using Workspace ONE Access, formerly known as VMware Identity Manager (vIDM). Two-factor authentication is a mechanism that checks username and password as usual, but adds an additional security control before users are authenticated. It is a particular deployment of a more generic approach known as Multi-Factor Authentication (MFA). Throughout this post, I will be configuring VMware Verify as that second authentication factor.

What is VMware Verify?

VMware Verify is built into Workspace ONE Access (vIDM) at no additional cost, providing a 2FA solution for applications. VMware Verify can be required on a per-app basis for web or virtual apps on the Workspace ONE launcher, or to log in to Workspace ONE to view your launcher in the first place. The VMware Verify app is currently available on iOS and Android. VMware Verify supports three methods of authentication:

  1. OneTouch approval
  2. One-time passcode via VMware Verify app (soft token)
  3. One-time passcode over SMS

By using VMware Verify, security is increased: a successful authentication no longer depends only on something users know (their password) but also on something they have (their mobile phone), so for a successful break-in, attackers would need to steal both from a compromised user.

1. Configure VMware Verify

First you need to download and install VMware Workspace ONE Access, which is very simple to deploy from the OVA. VMware Verify is provided as a service and thus does not require installing anything on an on-premises server. To enable VMware Verify, you must contact VMware support; they will provide you a security token, which is all you need to enable the integration with VMware Workspace ONE Access (vIDM). Once you get the token, log in to vIDM as an admin user and:

  1. Click on the Identity & Access Management tab
  2. Click on the Manage button
  3. Select Authentication Methods
  4. Click on the configure icon (pencil) next to VMware Verify
  5. A new window will pop up. Select the Enable VMware Verify checkbox, enter the security token provided by VMware support, and click Save.

2. Create a Local Directory on VMware Workspace ONE Access

VMware Workspace ONE Access not only supports Active Directory and LDAP directories, but also other types of directories, such as local directories and Just-in-Time directories. For this lab, I am going to create a local directory using the local directory feature of Workspace ONE Access. Local users are added to a local directory on the service; we need to manage the local user attribute mapping and password policies. You can create local groups to manage resource entitlements for users.

  1. Select the Directories tab
  2. Click on “Add Directory”
  3. Specify the directory name and domain name (this is the same domain name I registered for VMware Verify).

3. Create/Configure a built-in Identity Provider

Once the second authentication factor is enabled as described in steps 1 and 2, it must be added as an authentication method to a Workspace ONE Access built-in identity provider. If one already exists in your environment, you can reconfigure it; alternatively, you can create a new built-in identity provider as explained below. Log in to Workspace ONE Access as an admin user and then:

  1. Select the Identity & Access Management tab
  2. Click on the Manage button
  3. Click on the Identity Providers link
  4. Click on the Add Identity Provider button and select Create Built-in IDP
  5. Enter a name describing the Identity Provider (IdP)
  6. Select which users can authenticate using the IdP – in the example below, I am selecting the local directory that I created above
  7. Select the network ranges from which users will be directed to the authentication mechanism described in the IdP
  8. Select the authentication methods to associate with this IdP – here I am selecting VMware Verify as well as Local Directory
  9. Finally, click on the Add button

4. Update Access Policies on Workspace ONE Access

The last configuration step on Workspace ONE Access (vIDM) is to update the default access policy to include the second-factor authentication mechanism. To do that, log in to Workspace ONE Access as an admin user and then:

  1. Select the Identity & Access Management tab
  2. Click on the Manage button
  3. Click on the Policies link
  4. Click on the Edit Default Policy button
  5. This will open a new page showing the details of the default access policy. Go to “Configuration” and click on “ALL RANGES”.

A new window will pop up. Modify the settings right below the line “then the user may authenticate using:”

  1. Select Password as the first authentication method – this way users will have to enter their ID and password as defined in the configured local directory
  2. Select the second authentication mechanism; here I am adding VMware Verify – after a successful password authentication, users will get a notification on their mobile phones to accept or deny the login request.
  3. I am leaving the line “If preceding Authentication Method fails or is not applicable, then:” empty – this is because I don’t want to configure any fallback authentication mechanism; you can choose one based on your needs.

5. Download the Mobile App and Register a User from a Cloud Director Organization

  1. Open the app store on your mobile phone, search for VMware Verify, and download the app.
  2. Once it is downloaded, open the application. It will ask for your mobile number and e-mail address. On the screenshot below, I am providing my mobile number and an e-mail address that is only valid in my lab. After clicking OK, you will be offered two options for verifying your identity:
  1. Receiving an SMS message – the SMS contains a registration code that you enter into the app.
  2. Receiving a phone call – after clicking this option, the app shows a registration code that you type on the phone pad once you receive the call.
  3. Since I am using the SMS method, the app asks me to enter the code received over SMS manually (XopRcVjd4u2).
  4. Once your identity has been verified, you will be asked to protect the app by setting a PIN. After that, the app will show that no accounts are configured yet.
  5. Click on Account and add the account

Immediately after that, we will start receiving tokens in the VMware Verify mobile app, so at this moment you are ready to move to the next step.

6. Enable VMware Cloud Director Federation with VMware Workspace ONE Access

There are three authentication methods that are supported by vCloud Director:

Local: local users created at the time of installing vCD or when creating a new organization.

LDAP service: an LDAP service enables organizations to use their own LDAP servers for authentication. Users can then be imported into vCD from the configured LDAP.

SAML Identity Provider: a SAML Identity Provider can be used to authenticate users in organizations. SAML v2.0 metadata is required for the service to be configured. The metadata must include the location of the single sign-on service, the single logout service, and the X.509 certificate for the service. In this post we will be federating VMware Workspace ONE Access with VMware Cloud Director.
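If you prefer the command line, the same SP metadata can usually be fetched directly; a minimal sketch, assuming the conventional Cloud Director metadata path and a hypothetical host name (verify both against your VCD version):

# Download the org's SAML SP metadata (the same spring_saml_metadata.xml the UI offers).
# vcd.example.com is a placeholder; "abc" is the org name used later in this post.
curl -k -o spring_saml_metadata.xml \
  https://vcd.example.com/cloud/org/abc/saml/metadata/alias/vcd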

So, let’s go ahead and log in to the VMware Cloud Director organization, go to “Administration”, and click on “SAML”:

  1. Enable federation by setting “Entity ID” to any unique string; in this case I am using the org name, which in my lab is “abc”
  2. Then click on “Generate” to generate a new certificate and click “SAVE”
  3. Download the metadata from the link; it will download the file “spring_saml_metadata.xml”. This activity can be performed by a system or org administrator.
  4. In the VMware Workspace ONE Access (vIDM) admin console, go to “Catalog” and create a new web application.
    1. Enter the application name and description, upload an icon, and choose a category.
  5. On the next screen, keep the Authentication Type as SAML 2.0 and paste the XML metadata downloaded in step 3 into the URL/XML window. Scroll down to Advanced Properties.
  6. In Advanced Properties we keep the defaults but add Custom Attribute Mappings, which describe how vIDM user attributes translate to VCD user attributes.
  7. Now we can finish the wizard: click Next, select the access policy (keep the default), and review the summary on the next screen.
  8. Next, we need to retrieve the vIDM metadata configuration: go back to Catalog and click on Settings, then under SAML Metadata download the Identity Provider (IdP) metadata.
  9. Now we can finalize the SAML configuration in vCloud Director: on the Federation page, toggle the Use SAML Identity Provider button to enable it, import the downloaded metadata (idp.xml) with the Browse and Upload buttons, and click Apply.
  10. Finally, we need to import some users/groups to be able to use SAML. You can import VMware Workspace ONE Access (vIDM) users by user name or group, and assign a role to each imported user.

This completes the federation process between VMware Workspace ONE Access (vIDM) and VMware Cloud Director. For more details, you can refer to this blog post.

Result – Cloud Director Two-Factor Authentication in Action

Let your tenants go to a browser and open their tenant URL; they will get automatically redirected to the VMware Workspace ONE Access page for authentication:

  1. The user enters a user name and password; on successful authentication, the flow moves to 2FA
  2. In the next step, the user gets a notification on their mobile phone
  3. Once the user approves the authentication on the phone, VMware Workspace ONE Access grants access based on the role assigned in VMware Cloud Director.

On-Board a New User

  1. Create a new user in VMware Workspace ONE Access and grant the user access to the application.
  2. The user gets an email to set up their password and must configure it.
  3. The administrator logs in to Cloud Director and imports the newly created user from SAML with a Cloud Director role.
  4. The user browses the cloud URL; after logging in to the portal with user ID and password, they are asked to provide a mobile number for second-factor authentication.
  5. After entering the mobile number, a user who has installed the VMware Verify app gets an Approve/Deny notification; if the app is not installed, they can click “Sign in with SMS” and enter the code received over SMS for second-factor authentication.
  6. Once the user enters the passcode received on their phone, VMware Workspace ONE Access allows them to log in to Cloud Director.

This completes the installation and configuration of VMware Verify with VMware Cloud Director. You can add additional touches, such as branding of your cloud, which will give it your cloud identity.

Upgrade Tanzu Kubernetes Grid


Tanzu Kubernetes Grid makes it very simple to upgrade Kubernetes clusters without impacting control plane availability, and it ensures rolling updates for worker nodes. We just need to run two commands: the “tkg upgrade management-cluster” and “tkg upgrade cluster” CLI commands upgrade clusters that we deployed with Tanzu Kubernetes Grid 1.0.0. In this blog post we will upgrade Tanzu Kubernetes Grid from version 1.0.0 to 1.1.0.

Pre-Requisite

In this post we are going to upgrade TKG from version 1.0.0 to version 1.1.0. To start the upgrade, we need to download the new versions of the tkg client CLI, the base OS image template, and the API server load balancer template.

  1. Download and install the new version of the tkg CLI on your client VM
    1. For Linux, download VMware Tanzu Kubernetes Grid CLI 1.1.0 Linux and upload it to the VM.
    2. Unzip it using
      1. #gunzip tkg-linux-amd64-v1.1.0-vmware.1.gz
    3. The unzipped file will be named tkg-linux-amd64-v1.1.0-vmware.1
    4. Move and rename it to “tkg” using
      1. #mv ./tkg-linux-amd64-v1.1.0-vmware.1 /usr/local/bin/tkg
    5. Make it executable using
      1. #chmod +x /usr/local/bin/tkg
    6. Run #tkg version; this should show the updated version of the tkg CLI client
  2. Download the new OVA templates
    1. Download the OVA for node VMs: photon-3-kube-v1.18.2-vmware.1.ova
    2. Download the OVA for LB VMs: photon-3-haproxy-v1.2.4-vmware.1.ova
    3. Once downloaded, deploy these OVAs using “Deploy OVF template” on vSphere
    4. When the OVA deployment completes, right-click the VM, select “Template”, and click on “Convert to Template”

Upgrade TKG Management Cluster

As you know, the management cluster is purpose-built for operating the Tanzu platform and managing the lifecycle of Tanzu Kubernetes clusters, so we need to upgrade the management cluster before we upgrade our Kubernetes clusters. This is the most seamless Kubernetes cluster upgrade I have ever done, so let’s get into it:

  1. First, list the TKG management clusters running in the environment; the command below displays information about the tkg management clusters
    1. #tkg get management-cluster
  2. Once you have the name of the management cluster, run the command below to proceed with the upgrade
    1. #tkg upgrade management-cluster <management-cluster-name>
    2. The upgrade process first upgrades the Cluster API providers for vSphere, then upgrades the Kubernetes version in the control plane and worker nodes of the management cluster.
  3. If everything goes fine, it should take less than 30 minutes to complete the upgrade of the management plane.
  4. Now if you run #tkg get cluster --include-management-cluster, it should show the upgraded version of the management cluster.

Upgrade Tanzu Kubernetes Cluster

Now that our management plane is upgraded, let’s go ahead and upgrade the Tanzu Kubernetes clusters.

  1. The process here is the same as for the management cluster; run the command below to proceed with the upgrade
    1. #tkg upgrade cluster <cluster-name>
    2. If the cluster is not running in the “default” namespace, specify the “--namespace” option (for example, when TKG is part of vSphere 7 with Kubernetes)
    3. The upgrade process upgrades the Kubernetes version across your control plane and worker virtual machines.
    4. Once done, you should see a successful upgrade of the Kubernetes cluster.
  2. Now log in with your credentials and check the Kubernetes version your cluster is running (see the sketch below).
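A minimal sketch of the whole workload cluster upgrade flow in one place; the cluster name my-cluster and namespace my-namespace are placeholders:

# List workload clusters and their current versions
tkg get clusters

# Rolling upgrade of the control plane and worker nodes
tkg upgrade cluster my-cluster --namespace my-namespace

# Fetch the kubeconfig credentials and check the node versions
tkg get credentials my-cluster
kubectl get nodes -o wide   # the VERSION column should show the new Kubernetes version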

This completes the upgrade process: such an easy way to upgrade Kubernetes clusters without impacting cluster availability.

Configuring Ingress Controller on Tanzu Kubernetes Grid


Contour is an open source Kubernetes ingress controller providing the control plane for the Envoy edge and service proxy. Contour supports dynamic configuration updates and multi-team ingress delegation out of the box while maintaining a lightweight profile. In this blog post I will be deploying the ingress controller along with the load balancer (the LB was deployed in this post). You can also expose the Envoy proxy as a NodePort, which allows you to access your service on each K8s node.

What is Ingress in Kubernetes?

“NodePort” and “LoadBalancer” let you expose a service by specifying that value in the service’s type. Ingress, on the other hand, is a completely independent resource from your service: you declare, create, and destroy it separately from your services.

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give services externally reachable URLs, load balance traffic, terminate SSL/TLS, and offer name-based virtual hosting.


Pre-requisite

Before we begin we’ll need to have a few pieces already in place:

  • A Kubernetes cluster (See Here on How to Deploy TKG)
  • kubectl configured with admin access to your cluster
  • The bundle of Tanzu Kubernetes Grid extensions, downloaded and unpacked (it can be downloaded from here)

Install Contour Ingress Controller

Contour is an ingress controller for Kubernetes that works by deploying the Envoy proxy as a reverse proxy and load balancer. To install Contour, follow the steps below:

  • Move the VMware Tanzu Kubernetes Grid Extensions Manifest 1.1.0 bundle (downloaded in the prerequisite stage) to your client VM and unzip it.
  • You deploy Contour and Envoy directly on Tanzu Kubernetes clusters. You do not need to deploy Contour on management clusters.
  • Set the context of kubectl to the Tanzu Kubernetes cluster on which to deploy Contour.
    • #kubectl config use-context avnish-admin@avnish
  • First, install cert-manager on the Kubernetes cluster
    • #kubectl apply -f tkg-extensions-v1.1.0/cert-manager/
  • Deploy Contour and Envoy on the cluster (verify the rollout as shown below the list):
    • #kubectl apply -f tkg-extensions-v1.1.0/ingress/contour/vsphere/

This completes the installation of the Contour ingress controller on the Tanzu Kubernetes cluster. Let’s deploy an application and test the functionality.

Deploy a Sample Application

Next we need to deploy at least one Ingress object before Contour can serve traffic. Note that, as a security feature, Contour does not expose a port to the internet unless there’s a reason it should. A great way to test your Contour installation is to deploy a sample application.

In this example we will deploy a simple web application, configure load balancing for that application using the Ingress resource, and access it using the load balancer IP/FQDN. This application is available in the examples folder of the bundle we downloaded from VMware. Let’s deploy the application:

  • Run the command below to deploy the application; it creates a new namespace named “test-ingress”, two services, and one deployment.
    • #kubectl apply -f tkg-extensions-v1.1.0/ingress/contour/examples/common

A very simple way of installing the application. Now let’s create the Ingress resource.

Create Ingress Resource

Let’s imagine a scenario where the “foo” team owns http://www.foo.bar.com/foo and the “bar” team owns http://www.foo.bar.com/bar. Considering this scenario:

  • Here is the Ingress resource definition for our example application:
    • apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: https-ingress
        namespace: test-ingress
        labels:
          app: hello
      spec:
        tls:
        - secretName: https-secret
          hosts:
            - foo.bar.com
        rules:
        - host: foo.bar.com
          http:
            paths:
            - path: /foo
              backend:
                serviceName: s1
                servicePort: 80
            - path: /bar
              backend:
                serviceName: s2
                servicePort: 80
  • Let’s deploy it using the command below:
    • #kubectl apply -f tkg-extensions-v1.1.0/ingress/contour/examples/https-ingress
    • Check the status and grab the external IP address of the Contour “envoy” proxy.
    • Add an /etc/hosts entry mapping the above IP address to foo.bar.com (or use curl as shown below)
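If you would rather not edit /etc/hosts, curl can pin the host name to the Envoy IP per request; a sketch, with 10.10.10.10 standing in for your EXTERNAL-IP:

# --resolve maps foo.bar.com to the LB IP for this request only;
# -k accepts the self-signed certificate stored in https-secret
curl -k --resolve foo.bar.com:443:10.10.10.10 https://foo.bar.com/foo
curl -k --resolve foo.bar.com:443:10.10.10.10 https://foo.bar.com/bar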

Test the Application

To access the application, browse the foo and the bar services from a desktop that has access to the service network.

  • If you browse bar, the bar service responds
  • If you browse foo, the foo service responds

This completes the installation and configuration of ingress on a VMware Tanzu Kubernetes Grid K8s cluster. Contour is VMware’s open source ingress controller; it offers a rich feature set (details here), and when customers choose the Tanzu portfolio, they get Contour as a VMware-supported version.

Ingress on Cloud Director Container Service Extension

In this blog post I will be deploying an ingress controller, along with the load balancer deployed in the previous post, into a tenant organization VDC Kubernetes cluster that was deployed by Cloud Director Container Service Extension.

What is Ingress in Kubernetes?

“NodePort” and “LoadBalancer” let you expose a service by specifying that value in the service’s type. Ingress, on the other hand, is a completely independent resource from your service: you declare, create, and destroy it separately from your services.

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give services externally reachable URLs, load balance traffic, terminate SSL/TLS, and offer name-based virtual hosting.


Pre-requisite

Before we begin we’ll need to have a few pieces already in place:

  • A Kubernetes cluster (See Deployment Options for provider specific details)
  • kubectl configured with admin access to your cluster
  • RBAC must be enabled on your cluster

Install Contour

To install Contour, run:

  • #kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

This command creates:

  • A new namespace, projectcontour
  • A Kubernetes DaemonSet running Envoy on each node in the cluster, listening on host ports 80/443
  • Now we need to retrieve the external address assigned to Contour by the load balancer that we deployed in the previous post; to get the LB IP, run the command shown below the list.
  • The “External IP” comes from the IP address range we gave in the LB config; we will NAT this IP on the VDC edge gateway to make it accessible from outside/the internet.

Deploy an Application

Next we need to deploy at least one Ingress object before Contour can serve traffic. Note that, as a security feature, Contour does not expose a port to the internet unless there’s a reason it should. A great way to test your Contour installation is to deploy a sample application.

In this example we will deploy a simple web application, configure load balancing for that application using the Ingress resource, and access it using the load balancer IP/FQDN. This application is hosted on GitHub and can be downloaded from here. Once downloaded:

  • Create the coffee and tea deployments and services
  • Create a secret with an SSL certificate and a key
  • Create an Ingress resource (a sketch of these three steps follows this list)
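A sketch of those three steps with kubectl, assuming the manifest file names used by the downloaded example (adjust them to match the repository you cloned):

# 1. Coffee and tea deployments and services
kubectl apply -f cafe.yaml

# 2. TLS secret with the certificate and key used by the ingress
kubectl apply -f cafe-secret.yaml

# 3. Ingress resource routing /coffee and /tea to the two services
kubectl apply -f cafe-ingress.yaml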

This completes the deployment of the application.

Test the Application

To access the application, browse the coffee and the tea services from a desktop that has access to the service network. You will also need to add the hostname/IP to your /etc/hosts file or your DNS server.

  • If you browse coffee, the coffee service responds
  • If you prefer tea, the tea service responds

This completes the installation and configuration of ingress on VMware Cloud Director Container Service Extension. Contour is VMware’s open source ingress controller and offers a rich feature set; details can be found here.

Load Balancer for Cloud Director Container Service Extension

In this blog post I will be deploying a load balancer into a tenant organization VDC Kubernetes cluster that was deployed by Cloud Director Container Service Extension.

What is a Load Balancer in Kubernetes?

To understand load balancing on Kubernetes, we first need to understand some Kubernetes basics:

  • A “pod” in Kubernetes is a set of containers that are related in terms of their function, and a “service” is a set of related pods that have the same set of functions. This level of abstraction insulates the client from the containers themselves. Pods can be created and destroyed by Kubernetes automatically, and since every new pod is assigned a new IP address, pod IP addresses are not stable; therefore, direct communication between pods is generally not possible. Services, however, have their own relatively stable IP addresses; thus, a request from an external resource is made to a service rather than a pod, and the service then dispatches the request to an available pod.

An external load balancer applies logic that ensures the optimal distribution of these requests. To create a load balancer, your cluster must be deployed into a Cloud Director based cloud; follow the steps below to configure a load balancer for your Kubernetes cluster. A minimal example of a LoadBalancer-type service follows.
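To make the idea concrete, here is a minimal sketch of exposing a deployment through a Service of type LoadBalancer; the deployment name and the nginx image are purely illustrative:

# A throwaway deployment to front with a load balancer
kubectl create deployment web --image=nginx

# Exposing it as type LoadBalancer asks the cluster's LB implementation for an external IP
kubectl expose deployment web --port=80 --type=LoadBalancer

# EXTERNAL-IP stays <pending> until a load balancer (such as MetalLB, below) provides one
kubectl get service web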


METALLB Load Balancer

MetalLB hooks into your Kubernetes cluster and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type “LoadBalancer” in such clusters. For more information, refer here.

Pre-requisite

MetalLB requires the following prerequisites to function:

  • A CSE Kubernetes cluster, running Kubernetes 1.13.0 or later.
  • A cluster network configuration that can coexist with MetalLB.
  • Some IPv4 addresses for MetalLB to hand out.
  • For this demo I will be using my existing CSE cluster.

MetalLB Load Balancer Deployment

MetalLB deployment is a very simple three-step process:

  • Create a new namespace as below:
    • #kubectl create ns metallb-system
  • The command below deploys MetalLB to your cluster, under the metallb-system namespace. The manifest creates:
    • #kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
    • The metallb-system/controller deployment – the cluster-wide controller that handles IP address assignments.
    • The metallb-system/speaker daemonset – the component that speaks the protocol(s) of your choice to make the services reachable.
    • Service accounts for the controller and speaker, along with the RBAC permissions that the components need to function.
  • On first install, create the memberlist secret:
    • #kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

NOTE – I am accessing my CSE cluster through NAT; that is why I am using “--insecure-skip-tls-verify”.

MetalLB Layer 2 Configuration

The installation manifest does not include a configuration file. MetalLB’s components will still start, but will remain idle until you define and deploy a ConfigMap. The specific configuration depends on the protocol you want to use to announce service IPs. Layer 2 mode is the simplest to configure, and in many cases you don’t need any protocol-specific configuration, only IP addresses.

  • The following ConfigMap gives MetalLB control over IPs from 192.168.98.220 to 192.168.98.250 and configures Layer 2 mode (apply it as shown after the list):
    • apiVersion: v1
      kind: ConfigMap
      metadata:
        namespace: metallb-system
        name: config
      data:
        config: |
          address-pools:
          - name: default
            protocol: layer2
            addresses:
            - 192.168.98.220-192.168.98.250
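Save the ConfigMap to a file and apply it; a short sketch (the file name is arbitrary):

# Apply the Layer 2 address-pool configuration shown above
kubectl apply -f metallb-config.yaml

# The controller log should confirm that the config was loaded
kubectl logs -n metallb-system deploy/controller | tail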

This completes the installation and configuration of the load balancer. Let’s go ahead and publish an application using the Kubernetes “type=LoadBalancer” service type; CSE and MetalLB will take care of everything.

Deploy an Application

Before deploying the application, I want to show my Cloud Director network topology, where the container workload is deployed and the Kubernetes services are created. Here we have one red segment (192.168.98.0/24) for the container workload, where CSE has deployed the Kubernetes worker nodes, and on the same network we have deployed our MetalLB load balancer.

Kubernetes pods are created on the Weave network, which is the internal software-defined networking for CSE, and services are exposed using the load balancer, which is configured with the “OrgVDC” network.


Let’s get started. We are going to use the “guestbook” application, which uses Redis to store its data: it writes its data to a Redis master instance and reads data from multiple Redis worker (slave) instances. Let’s go ahead and deploy it:

NOTE – All of the above steps are covered in detail on the Kubernetes.io website – here is the link.
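The guestbook tutorial exposes the frontend internally by default; a sketch of switching it over to our MetalLB pool, assuming the service keeps its tutorial name, frontend:

# Change the service type so MetalLB assigns it an address from the pool
kubectl patch service frontend -p '{"spec": {"type": "LoadBalancer"}}'

# EXTERNAL-IP should now come from the 192.168.98.220-250 range configured earlier
kubectl get service frontend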

Accessing Application

To access the guestbook Service, you need to find the external IP of the Service you just set up by running the command:

  • #kubectl get services
  • Go back to your organization VDC and create a NAT rule so that the service can be accessed using a routable/public IP.
  • Copy the IP address in the EXTERNAL-IP column and load the page in your browser.

A very easy and simple process to deploy and access your containerized applications running on the secure and easy-to-use VMware Cloud Director. Go ahead and start running containerized apps with upstream Kubernetes on Cloud Director.

Stay tuned! In the next post I will be deploying ingress on a CSE cluster.

Installing Tanzu Kubernetes Grid

This blog post helps you create Tanzu Kubernetes Grid clusters running on VMware Cloud on AWS and/or vSphere 6.7 Update 3 infrastructure.

NOTE – Tanzu Kubernetes Grid Plus is the only supported version on VMware Cloud on AWS. You can deploy Kubernetes clusters on your VMC clusters using Tanzu Kubernetes Grid Plus. Please refer to KB 78173 for the detailed support matrix.

Pre-requisite

On your vSphere/VMware Cloud on AWS instance ensure that you have the following objects in place:

  • A resource pool in which to deploy the Tanzu Kubernetes Grid (TKG) instance
  • A VM folder in which to collect the Tanzu Kubernetes Grid (TKG) VMs
  • A network segment with DHCP enabled
  • Firewall rules on the compute segment
  • This is not a must, but I prefer a Linux-based virtual machine called “cli-vm”, which we will use as a bootstrap environment to install the Tanzu Kubernetes Grid CLI binaries for Linux

Installing Kubectl

Kubectl is a command-line tool for controlling Kubernetes clusters that we will use to manage the K8s clusters deployed by TKG. To install the latest version of kubectl, follow the steps below on “cli-vm”:

#curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
#chmod +x kubectl
#mv kubectl /usr/local/bin


Installing Docker

Docker is a daemon-based container engine that allows us to deploy applications inside containers. Since my VM is CentOS, I started the Docker service; to do so, follow the steps below on cli-vm:

#systemctl start docker
#systemctl enable docker

To view the status of the daemon, run the following command:

#systemctl status docker

Install Tanzu Kubernetes Grid CLI

To use Tanzu Kubernetes Grid, you download and run the Tanzu Kubernetes Grid CLI on a local system; in our case we will install it on our “cli-vm”.

  • Get the tkg CLI binary from the GA build page:
    • For Linux platforms, download tkg-linux-amd64-v1.0.0_vmware.1.gz.
    • For Mac OS platforms, download tkg-darwin-amd64-v1.0.0_vmware.1.gz.
  • Unzip the file:
    • gunzip ./tkg-linux-amd64-v1.0.0_vmware.1.gz
  • Make the unzipped binary executable and copy it to /usr/local/bin:
    • chmod +x ./tkg-linux-amd64-v1.0.0_vmware.1
    • mv ./tkg-linux-amd64-v1.0.0_vmware.1 /usr/local/bin/tkg
  • Check that the tkg environment is ready:
    • # tkg version
      Client:
      Version: v1.0.0
      Git commit: 60f6fd5f40101d6b78e95a33334498ecca86176e
  • The /root/.tkg folder will be auto-created for the tkg config file


Create an SSH Key Pair

In order for Tanzu Kubernetes Grid VMs to run tasks in vSphere, you must provide the public key part of an SSH key pair to Tanzu Kubernetes Grid when you deploy the management cluster. You can use a tool such as ssh-keygen to generate a key pair.

  • On the machine on which you will run the Tanzu Kubernetes Grid CLI, run the following ssh-keygen command.
    • #ssh-keygen -t rsa -b 4096 -C "email@example.com"
  • At the prompt
    • Enter file in which to save the key (/root/.ssh/id_rsa):
    • press Enter to accept the default.
  • Enter and repeat a password for the key pair.
  • Open the file .ssh/id_rsa.pub in a text editor; copy its contents and paste them when you deploy the management cluster.

Import OVA & Create Template in VC

Before we can deploy a Tanzu Kubernetes Grid management cluster or Tanzu Kubernetes clusters to vSphere, we must provide a base OS image template to vSphere. Tanzu Kubernetes Grid creates the management cluster and Tanzu Kubernetes cluster node VMs from this template. Tanzu Kubernetes Grid provides a base OS image template in OVA format for you to import into vSphere. After importing the OVA, you must convert the VM into a vSphere VM template.

  • TKG needs two OVAs: photon-3-v1.17.3+vmware.2.ova and photon-3-capv-haproxy-v0.6.3+vmware.1.ova.
  • Convert both OVA VMs to templates and put them into the VM folder.
  • Here are the high-level steps (a scripted alternative follows this list):
    • In the vSphere Client, right-click on the cluster and select Deploy OVF template.
    • Choose Local file, click the button to upload files, and navigate to the photon-3-v1.17.3_vmware.2.ova file on your local machine.
    • Follow the on-screen instructions to deploy a VM from the OVA template:
      • choose the appliance name
      • choose the destination datacenter or folder
      • choose the destination host, cluster, or resource pool
      • accept the EULA
      • select the disk format and datastore
      • select the network for the VM to connect to
      • click Finish to deploy the VM
      • right-click on the VM, select Template, and click on Convert to Template

    • Follow the same steps for the HAProxy OVA – photon-3-capv-haproxy-v0.6.3+vmware.1.ova
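If you prefer scripting the import over clicking through the wizard, the govc CLI can do the same; a sketch, assuming govc is installed and the GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD environment variables point at your vCenter:

# Import both OVAs into the TKG folder and resource pool
govc import.ova -name photon-3-kube -folder TKG -pool TKG photon-3-v1.17.3+vmware.2.ova
govc import.ova -name photon-3-haproxy -folder TKG -pool TKG photon-3-capv-haproxy-v0.6.3+vmware.1.ova

# Convert the resulting VMs into templates
govc vm.markastemplate photon-3-kube
govc vm.markastemplate photon-3-haproxy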

Installing TKG Management Cluster

Once the pre-work is done, follow the steps below to create the Tanzu management cluster:

  • On the machine on which you downloaded and installed the Tanzu Kubernetes Grid CLI, run the following command
    • #tkg init --ui
  • If your CLI VM runs an X11 desktop, this opens a browser on the loopback IP; if not, you can set up a tunnel using PuTTY on your Windows desktop (see the SSH tunnel sketch after this list)
  • Once you have successfully opened the connection, open your web browser and navigate to http://127.0.0.1:8081; you should see the installer page
  • Enter the details of the IaaS provider where TKG can create the K8s cluster; in this case we enter the VMC vCenter Server information and click the “Connect” button. Accept the notifications, and the “Connect” button changes to “Connected”. From here, select where you want to deploy TKG: fill in the datacenter and the SSH key we created in the previous steps.
  • Select the Development or Production flavour and specify an instance type, then give the K8s management cluster a name and select the API server load balancer (specify the HAProxy VM template, which we uploaded in a previous step)
  • Select the resource pool (TKG), VM folder (TKG), and WorkloadDatastore
  • Select the network name and leave the others as default
  • Select the K8s PhotonOS template; this is the VM template that we uploaded in the previous steps.
  • Review all settings to ensure they match your selections and then click on “Deploy Management Cluster”, which begins the deployment process.
  • This takes around 5 to 6 minutes to complete the entire process of setting up the TKG management cluster. Once the management cluster has been deployed, you can close the web browser, go back to your SSH session, and stop the TKG UI rendering.
  • You can verify that all pods are up and running with the command:
    • #kubectl get pods -A

Deploy a Tanzu Kubernetes Workload Cluster

Now that we have completed the Tanzu Kubernetes Grid management cluster, we can use the tkg CLI to deploy Tanzu Kubernetes clusters from the management cluster to vSphere/VMware Cloud on AWS. Run the following command to deploy a TKG cluster called “avnish” or any other name you wish to use.

  • #tkg create cluster avnish --plan=dev

The above command deploys a Tanzu Kubernetes cluster with the minimum default configuration. I am also deploying another cluster by specifying a few more parameters.

Deploy a Highly Available Kubernetes Cluster

This command deploys a highly available Kubernetes cluster.

  • #tkg create cluster my-cluster --plan dev --controlplane-machine-count 3 --worker-machine-count 5
  • Once the TKG cluster is up and running, we can run the following commands to get cluster information and credentials:
    • #tkg get clusters
    • #tkg get credentials <cluster-name>
  • To get the context of the TKG clusters, run the normal Kubernetes commands:
    • #kubectl config get-contexts
    • #kubectl config use-context avnish-admin@avnish
  • To get the Kubernetes node details, run #kubectl get nodes

Scale TKG Clusters

After we create a Tanzu Kubernetes cluster, we can scale it up or down by increasing or reducing the number of node VMs that it contains.

  • Scale up or down – to scale a cluster, use the #tkg scale cluster command (an example follows this list):
    • --controlplane-machine-count changes the number of control plane nodes
    • --worker-machine-count changes the number of worker nodes

Installing Octant on TKG Clusters

Octant is an open source, developer-centric web interface for Kubernetes that lets you inspect a Kubernetes cluster and its applications; it is simple to install and easy to use.

  • Here is more information on installation and configuration
  • You need to install Octant on your CLI VM and then proxy it so that you can open its web interface (see the sketch below).
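A sketch of serving Octant from the cli-vm so a remote desktop can reach it; the flags below exist in recent Octant releases, but verify them against your version with octant --help:

# Listen on all interfaces instead of loopback and don't try to open a local browser
octant --listener-addr 0.0.0.0:7777 --disable-open-browser

# Then browse to http://cli-vm:7777 from your desktop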

This completes the installation and configuration of Tanzu Kubernetes Grid. Once you have the management cluster ready, go ahead and deploy containerized applications on these clusters. TKG gives you a lot of flexibility in deploying, scaling, and managing multiple TKG workload clusters, which can be allocated per department/project.

Install and Configure Cloud Director App Launchpad

In continuation of my last post on the same topic, in this post we will deploy and configure Cloud Director App Launchpad.

With App Launchpad, VMware Cloud Providers can now deliver their own catalog-based applications, VMware Cloud Marketplace certified third-party cloud applications, and Bitnami catalog applications directly to customers through a simple catalog interface in a VMware Cloud Director plugin. This capability allows cloud providers to deliver an application Platform as a Service to customers who needn’t know anything about the supporting infrastructure for the catalog applications they deploy.

NOTE – In this release (App Launchpad 1.0), tenants can launch only single-VM applications.

Prerequisites for App Launchpad Installation

Before we install and configure App Launchpad, note that it requires a few external components, at specific supported versions, that you must deploy and configure.

  • Create a new virtual machine that meets the App Launchpad requirements
  • Ensure RabbitMQ is installed and configured under Cloud Director extensibility before deploying App Launchpad.
  • Inside the same RabbitMQ server, create a new exchange of type “direct” and a dedicated AMQP user that has full permissions to the virtual host of the AMQP broker.

Install Cloud Director App Launchpad

App Launchpad is deployed by installing an RPM package on a dedicated Linux virtual machine. Download App Launchpad from here and transfer the file to the ALP server; the installation is a very simple process:

  • Open an SSH connection to the installation target Linux virtual machine and log in by using a user account with sufficient privileges to install an RPM package.
  • Install the RPM package by running the installation command.
    • yum install -y vmware-vcd-alp-1.0.0-1593616.x86_64.rpm
    • 16

Connect App Launchpad with Cloud Director

To configure App Launchpad with Cloud Director, we will use the alp command line utility. By using this utility:

  • We will establish a connection between App Launchpad and VMware Cloud Director
  • Define or create the App-Launchpad-Service account
  • Install the App Launchpad user interface plug-in for VMware Cloud Director.
  • The alp connect command also configures App Launchpad with your AMQP broker.
  • #alp connect --sa-user alpadmin --sa-pass <PASSWORD> --url https://10.96.98.50 --admin-user admin@system --admin-pass <PASSWORD> --amqp-user alp --amqp-pass <PASSWORD> force --amqp-exchange alpext
  • Accept the “EULA” and “certificate”
  • If you have entered the correct information, it should show that the connection was successfully configured
  • Restart the ALP service using
    • #systemctl restart alp
  • You can run #alp show to verify the connection

Configure App Launchpad

  • Now you can go to Cloud Director and check the installed ALP plugin.
  • Click on “LAUNCH SETUP” to configure it to offer Applications as a Service
  • If you want to configure the infrastructure for App Launchpad automatically, select Yes and the software will set everything up automatically.
  • If you choose “No, I will set it up on my own”, you need to set up the prerequisites manually.
  • App Launchpad supports the use of applications from the Bitnami applications catalog that is available in the VMware Cloud Marketplace.

    You can also create catalogs of your custom, in-house applications and configure App Launchpad to work with these catalogs.

  • Create sizing templates for the applications.
    1. Enter a name for the sizing template.
    2. Enter a vCPU count, a memory size (in GB), and a disk size (in GB)
  • To complete the initial configuration of App Launchpad, click Finish.

  • If everything goes fine and there are enough resources in Cloud Director, you will see “App Launchpad Setup Complete”

Onboarding Bitnami Applications

VMware Cloud providers can import applications from the Bitnami applications catalog that is available in the VMware Cloud Marketplace. To begin, the provider must log in to the VMware Cloud Marketplace and subscribe to the Bitnami application they wish to deploy. Follow these steps:

  • Log in to the VMware Cloud Marketplace.
  • From the “Catalog” page, find the Bitnami application you wish to deploy (With App Launchpad 1.0, tenant users can only run single-VM applications) and select it to subscribe.
  • On the “Settings” page, choose “VCD” as the platform and select the correct version. Set the subscription type to “BYOL”. Click “Next” to proceed.


  • The subscription will now be added to your Cloud Director App Launchpad organization, where tenants can use it.
  • Make sure the “App Launchpad” organization has the right permissions.

Onboarding In-house Applications

Cloud providers can also add their own in-house applications to the content library of the “AppLaunchpad” provider organization and upload applications manually. To do so:

  • The provider admin needs to navigate to the “Content Libraries -> vApp Templates” page and click on “NEW”.
  • By default, in-house applications have neither a logo nor a summary.

To give these apps a better user experience, the service provider can set metadata on the vApp templates via the GUI or the vCloud API. Here is the GUI way to do so (a hedged API sketch follows the list):

  • Go to the content library, click on the application you recently uploaded, go to Metadata, click on “Edit”, and add the following items:
    • title – Title of the application
    • summary – Summary of the application, displayed on the application tile
    • description – Description of the application
    • version – Version number of the application
    • logo – The provider can choose a logo from an internal/external web location like S3
    • screenshot – The provider can choose a default screenshot from an internal HTTP/HTTPS server or an external web location like S3
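And here is a hedged sketch of the API way; the endpoint shape and payload follow the general vCloud API metadata pattern, but check the exact paths, API version, and XML schema on code.vmware.com before relying on them:

# Illustrative only: merge metadata entries onto a vApp template.
# <vcd-host>, <template-id>, the API version, and the token are all placeholders.
curl -k -X POST \
  -H 'Accept: application/*+xml;version=33.0' \
  -H 'Authorization: Bearer <token>' \
  -H 'Content-Type: application/vnd.vmware.vcloud.metadata+xml' \
  --data @metadata.xml \
  'https://<vcd-host>/api/vAppTemplate/vappTemplate-<template-id>/metadata'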

This completes the installation and configuration of Cloud Director App Launchpad. As I said in my last post, App Launchpad is a free component for VMware Cloud Director and doesn’t necessitate the use of Bitnami catalogs; providers can use their own appliances. So go ahead and give it a try, and start delivering PaaS-like solutions to your tenants.

VMware Cloud Director Encryption – Part III

In Part 1 and Part 2 we configured the HyTrust KeyControl cluster and vCenter. In this post we will configure Cloud Director to utilize what we have configured so far.

Attach Storage Policy to Provider VDC

To update the information in the vCloud Director database about the VM storage policies we created in the underlying vSphere environment, we must refresh the storage policies of the vCenter Server instance.

  • Log in to Cloud Director with a cloud admin account, go to vSphere Resources, choose the vCenter on which we created the policies, and click on “REFRESH POLICIES”
  • You can add a VM storage policy to a provider virtual data center, after which you can configure organization virtual data centers backed by this provider virtual data center to support the added storage policy.
    • Log in to Cloud Director, go to Provider VDCs, and choose the PVDC backed by the cluster where we created the storage policies.
    • Click on “ADD”
  • Choose the policy that we created in the previous post.

Attach Storage Policy to Organization VDC

You can configure an organization virtual data center to support a VM storage policy that you previously added to the backing provider virtual data center.

  • Click on Organization VDCs and click the name of the target organization virtual data center.
  • Click the Storage tab, and click Add.
  • You can see a list of the available additional storage policies in the source provider virtual data center
  • Select the check boxes of one or more storage policies that you want to add, and click Add.

Self-Service Tenant Consumption

When a provider’s tenant creates a VM/vApp (a virtual machine can exist as a standalone machine or within a vApp), they can use the encryption policy that we created previously.

  • In the new-VM-from-template creation wizard, the tenant user must choose “use custom storage policy” and select the “encryption policy”
  • Once the VM is provisioned, the user can check the storage policy by clicking on the VM.
  • The user can also go into the “Hard Disk” section of the VM and check the disk policy.

Encrypt Named Disks

Named disks are standalone virtual disks that you create in organization VDCs. When you create a named disk, it is associated with an organization VDC but not with a virtual machine. After you create the disk in a VDC, the disk owner or an administrator can attach it to any virtual machine deployed in the VDC. The disk owner can also modify the disk properties, detach it from a virtual machine, and remove it from the VDC. System administrators and organization administrators have the same rights to use and modify the disk as the disk owner.

  • Here we will create a new encrypted named disk by choosing the “Encryption Policy” storage policy.
  • Cloud Director allows users to attach these named disks to virtual machines
  • Click the radio button next to the name of the named disk that you want to attach to a virtual machine, and click Attach
  • From the drop-down menu, select a virtual machine to which to attach the named disk, and click Apply.

This completes the three-part series on Cloud Director encryption configuration and its use by tenants. This feature enables new offerings and monetization opportunities for VMware Cloud Providers, so go ahead, deploy it, and start offering additional/differentiated services.

VMware Cloud Director Encryption – Part II

In Part 1 we configured the HyTrust KeyControl cluster. In this post we will configure this cluster in vCenter and configure encryption for virtual machines. Let’s create a certificate on the KMS server, which we will use to authenticate with vCenter.

Create Certificate

To create a certificate, log in to the KMS server and go to KMIP:

  • Click on “Client Certificates”
  • Then click on Actions and “Create Certificates”
  • Enter the required details for creating the certificate and click on Create.

Configure KMS with vCenter

  • Highlight the newly created certificate, click the Actions dropdown button, then click the Download Certificate option. This downloads the certificate created above: a zip file containing the certificate authority (CA) file and the certificate.
  • Once you have downloaded the certificate, log in to the VCSA, highlight the vCenter in the left-hand pane, click on the Configure tab in the right-hand pane, click on Key Management Servers, then click the Add KMS button.
  • Enter a cluster name, server alias, fully qualified domain name (FQDN)/IP of the server, and the port number. Leave the other fields as the default, then click OK.

Enable Trust between vCenter and KMS

  • Now we have to establish the trust relationship between vCenter and HyTrust KeyControl. Highlight the KeyControl appliance and click on Establish trust with KMS.
  • Select the Upload certificate and private key option, then click OK.
  • Click on the Upload file button, browse to where the CA file was previously downloaded, select the “vcenter name”.pem file, then click Open.
  • Repeat the process for the private key by clicking on the second Upload file button; verify that both fields are populated with the same file, then click OK.
  • You will now see that the connection status is shown as Normal, indicating that trust has been established. HyTrust KeyControl is now set up as the Key Management Server (KMS) for vCenter.
  • Now that we have successfully added one node of the cluster, add the other node by following the same steps.

Create Tag Category, Tag & Attach to Datastore

Now we need to tag the datastores that will hold the encrypted VMs: create a “Tag Category” and a “Tag” in vCenter and tag the datastores with this “Tag”.


Create Storage Profile

  • Log in to vCenter > Home > Policies and Profiles > VM Storage Policies > Create VM Storage Policy > give it a name > Next
  • Select “Enable host based rules” and select “Enable tag based placement rules”
  • Select “Storage Policy Component” and choose “Default Encryption Properties”. The default properties are appropriate in most cases. You need a custom policy only if you want to combine encryption with other features such as caching or replication.
  • Select “Tag Category” and choose the appropriate tag.
  • View the datastores, review the configuration, and finish.


This completes the vCenter configuration. In the next post we will configure Cloud Director to consume these policies, and tenants will use them.


VMware Cloud Director Encryption – Part I

The latest Cloud Director 10.1 release adds support for VM encryption through the Cloud Director self-service portal. This means users can encrypt/decrypt VMs and disks via Cloud Director and view the encryption status of VMs and disks in both the API and the user interface. Some of the key features are:

  • Ability to encrypt VMs at rest through the Cloud Director UI and API
  • Cloud providers configure the Key Management Service (KMS) and encryption policy in the backend vSphere
  • Cloud providers can choose to make VM encryption available to some or all tenants
  • Tenant users can choose to apply an encryption policy to VMs or individual disks.
  • In the case of a tenant-managed dedicated vCenter, the tenant can manage keys and VM encryption

I am going to write a three-part blog series, which will cover:

  • VMware Cloud Director Encryption – Part I
  • VMware Cloud Director Encryption – Part II
  • VMware Cloud Director Encryption – Part III

Deploy KMS

HyTrust KeyControl provides a fully functional KMIP server that can be deployed as a vSphere key management server. Once deployment is complete and a trusted connection between KeyControl and vSphere has been established, KeyControl can manage the encryption keys for virtual machines in the cluster that have been encrypted with vCenter Server for vSphere Virtual Machine Encryption or VMware vSAN Encryption.

In this post we will deploy the HyTrust KeyControl KMS server and set up a KMS cluster. There are two installation methods for KeyControl: an OVA appliance or an ISO. In this post we will use the OVA method.

  • Open your vSphere Web Client and click on “Deploy OVF Template”.
  • Choose the OVF.
  • Provide a name for the HyTrust KeyControl appliance, select a deployment location, then click Next.
  • Select the vSphere cluster or host on which you would like to install the HyTrust KeyControl appliance, then click Next.
  • Review the details, then click Next.
  • Select the proper configuration from the drop-down menu, then click Next (I am using Demo, as resources are limited in my lab).
  • Select the preferred storage and disk format for the KeyControl appliance, then click Next.
  • Select the appropriate network, enter the appropriate network details, then click Next.
  • Review the summary screen; if everything is correct, click Finish.

The appliance deployment is now complete. Since I am going to set up a cluster, I will deploy another appliance using the same procedure.

Configure KMS Cluster

Once both appliances have been deployed:

  • Power on the newly created HyTrust KeyControl appliance, then open a console to the KeyControl appliance. Set the system password, then press OK.
  • Since this is the first node, select No, then press Enter.
  • Review the appliance configuration, then press OK.
  • The first KeyControl appliance is now configured, and you can move to the KeyControl web GUI. Open a web browser and navigate to the IP or FQDN of the KeyControl appliance. Use the following credentials to initially log in:
    • Username: secroot
      Password: secroot
  • After login, read and accept the EULA by clicking on I Agree at the bottom of the agreement.
  • Enter a new password for the secroot account, then click Update Password.
  • We have now successfully set up our first node.
  • Power on the second appliance and follow the same steps as above, except: click “YES” here.
  • This takes us to the cluster creation process.
  • Enter the IP address of the first node.
  • The final piece of information required is the passphrase; a minimum of 16 characters is required.
  • The node must now be authenticated through the web GUI, as a message on the console indicates.
  • At this point, log on to the web GUI console of the first node with administration privileges. The new KeyControl node will automatically appear as an unauthenticated node in the KeyControl cluster.
  • To authenticate the new node, click the Actions button and then click Authenticate. This takes you to the authentication screen, where you are prompted to enter the authentication passphrase.
  • On the new KeyControl’s console, you will see a succession of status messages.
  • Once authentication completes, the KeyControl node is listed as Authenticated but Unreachable until cluster synchronization completes and the cluster is ready for use. This should not take more than a minute or two; it will then show as Authenticated and Online. Once the KeyControl node is available, the status automatically moves to Online and the cluster status at the top right of the screen changes back to Healthy.

At this point, the new cluster/node is ready to use.

Enable KMS Service

  • Now Click on the KMIP button on the toolbar to configure the KMIP.
    • 32.png
  • Enable KMIP by changing the state from disabled to enabled, then click save, then click Apply.
    • NOTE: Take note of the port number 5696 and have it handy. You will specify this port number in the vCenter\VCSA configuration, later on.
      33.png
  • We have now successfully set up the KMS cluster.
    • 34.png

This completes the KMS server installation and configuration, along with KMS cluster creation. In the next post, we will configure vSphere to use this cluster as its KMS server.

Deliver Applications as a Service on Cloud Director with App Launchpad

Cloud Director App Launchpad helps VMware Cloud Providers offer their tenants a curated portfolio of applications for consumption, without tenants having to know anything about the underlying VMware Cloud Director. With the release of App Launchpad, cloud providers can elevate their portfolio from IaaS and CaaS to Applications as a Service. Providers can offer in-house applications suited to verticals or solution areas, and App Launchpad makes it easy for all customer personas, such as DevOps engineers, developers, and IT admins, to access and deploy applications to VMware Cloud Director.

App Launchpad For Providers

VMware Cloud Providers can configure App Launchpad to work with the following types of applications:

  • Bitnami Applications

    • Bitnami offers pre-configured, tested, and supported open source applications. Service Providers that subscribe to the Bitnami Community Catalog can also access these applications from the VMware Cloud Marketplace.
    • n8
  • Apps from VMware Cloud Marketplace

    • VMware Cloud Marketplace is a service that allows our partners to easily publish solutions in a variety of formats, whether containers, appliances, or even SaaS. It offers a range of ISV applications that a Service Provider can add to VMware Cloud Director and make available for consumption by their tenants using App Launchpad.
    • n9
  • In-house applications

    • Service providers can upload their own in-house developed application vApps and make them available for their tenants to consume within seconds. Providers upload their solutions into the App Launchpad catalog, update the catalog description and logo using an API call, and make them available to their choice of tenants using App Launchpad.

Provider Onboarding Cloud Market Place Applications

To onboard Bitnami applications to App Launchpad, cloud providers have to go to VMware Cloud Marketplace and import the applications into the newly created Launchpad provider organization.


Provider Onboarding In-House Applications

To onboard custom, in-house applications, providers have to go to the content library of the newly created Launchpad provider organization, create catalog items, and upload their applications. Providers also need to add a logo and description to these applications using an API; please refer to code.vmware.com for the API calls.

Provider Tenant Access & Application Management

To make applications available to tenants, App Launchpad automatically creates a catalog of applications and publishes it to a VMware Cloud Director organization. Providers can also configure the default application deployment settings at the VMware Cloud Director organization and organization virtual data center levels.

Using App Launchpad, providers can control the visibility of application catalogs to tenant users and define various T-shirt sizes for the applications.

When a provider removes an application catalog from a VMware Cloud Director organization, the users in that organization can no longer use the applications in the catalog.


App Launchpad For Tenants

Using App Launchpad, various tenant personas like developers, IT admins, end users, and DevOps engineers can launch applications into their organization virtual data center in a few seconds and start consuming them immediately. A few of the key features are:

  • Curated catalog of applications for tenants
    • n3
  • 1-Click app Deployment for tenant users
    • n6
  • Automated VM creation, networking, firewalling, and IP assignment. Tenant users do not need to worry about the underlying infrastructure required to provision and access apps; they just go to App Launchpad – My Applications, where they can get access information and perform basic operations like:
    • Open Console
    • IP address
    • Actions like – power on/off , delete etc..
    • n7
  • No knowledge of the underlying infrastructure is required for tenant consumers to provision and access apps.

How to Start ?

Installing App Launchpad is a simple and easy process. Here are the App Launchpad installation prerequisites:

n2

NOTE – Only the Linux-based operating systems CentOS 7 and CentOS 8 are supported as of now.

For detailed installation steps, please refer to the App Launchpad documentation here; I will also write a few more posts on this topic.

App Launchpad is a free component of VMware Cloud Director and does not necessitate the use of Bitnami catalogs; providers can use their own appliances. So go ahead, give it a try, and start delivering PaaS solutions to your customers.

Quick & Easy Tenant OnBoarding using Cloud Director Terraform Provider

In continuation of my last two posts on using Terraform to automate various Cloud Director operations, here is another one. In this post we are going to onboard a tenant using the vCloud Director Terraform provider; in the last post I automated 5 steps, and this time there are 12 steps that we are going to automate. Special thanks to the Terraform for vCD product team, who helped me with some of this.

Here are my older posts on this topic.

unnamed

  1. Create a new External Network
  2. Create a new Organization for the Tenant
  3. Create a new Organization Administrator for this Tenant
  4. Create a new Organization VDC for the Tenant
  5. Deploy a new Edge gateway for the Tenant
  6. Create a new Routed Network for the Tenant
  7. Create a new Isolated Network for the Tenant
  8. Create a new Direct Network for the Tenant
  9. Create Organization Catalog
  10. Upload OVA/ISO to Catalog
  11. Creating vApp
  12. Create a VM inside the vApp

Step-1: Code for Creating External Network

As you know, an external network is a tenant's connection to the outside world. By adding an external network, you register vSphere network resources for vCloud Director to use; you can then create organization VDC networks that connect to an external network. A few important parameters to consider:

  • Resource Type – “vcd_external_network”
  • vsphere_network – This is a required parameter; you provide the distributed or standard port group name(s) that back this external network. Each referenced DV_PORTGROUP or NETWORK must exist on a vCenter server that is registered with vCloud Director.
  • Type
    • For a distributed port group, use type DV_PORTGROUP
    • For a standard port group, use type NETWORK
  • retain_net_info_across_deployments – (Optional) Specifies whether network resources such as the IP/MAC address of the router are retained across deployments. Default is false.
#Create a new External Network for "tfcloud"
resource "vcd_external_network" "extnet-tfcloud" {
  name        = "extnet-tfcloud"
  description = "external network"
  vsphere_network {
    vcenter = "vcsa.dp-pod.zpod.io" #VC name registered in vCD
    name    = "VM Network"
    type    = "NETWORK"
  }
  ip_scope {
    gateway    = "10.120.30.1"
    netmask    = "255.255.255.0"
    dns1       = "10.120.30.2"
    dns2       = "8.8.4.4"
    dns_suffix = "tfcloud.org"
    static_ip_pool {
      start_address = "10.120.30.3"
      end_address   = "10.120.30.253"
    }
  }
  retain_net_info_across_deployments = "false"
}

Step-2: Code for New Organization

In this section, we are going to create a new, enabled organization named “tfcloud”. The code creates a new vCloud organization by specifying the name, full name, description, VM quotas, vApp leases, and so on; the cloud provider should set the quota and lease values based on the commitment with the tenant organization.

#Create a new org name "tfcloud"
resource "vcd_org" "tfcloud" {
  name              = "terraform_cloud"
  full_name         = "Org created by Terraform"
  is_enabled        = "true"
  stored_vm_quota   = 50
  deployed_vm_quota = 50
  delete_force      = "true"
  delete_recursive  = "true"
  vapp_lease {
    maximum_runtime_lease_in_sec          = 0
    power_off_on_runtime_lease_expiration = false
    maximum_storage_lease_in_sec          = 0
    delete_on_storage_lease_expiration    = false
  }
  vapp_template_lease {
    maximum_storage_lease_in_sec       = 0
    delete_on_storage_lease_expiration = false
  }
}

Step-3: Code for Creating Organisation Administrator

Once you as a provider have created the org, it needs an administrator; the code below creates a local org admin. Most of it is self-explanatory, but a few important parameters are explained here:

  • Resource Type – “vcd_org_user”
  • org & name – the target organization (referencing the org created in Step 2) and the new user's name
  • role – role assigned to this user
  • password – initial password assigned
#Create a new Organization Admin
resource "vcd_org_user" "tfcloud-admin" {
  org               = vcd_org.tfcloud.name
  name              = "tfcloud-admin"
  password          = "*********"
  role              = "Organization Administrator"
  enabled           = true
  take_ownership    = true
  provider_type     = "INTEGRATED" # one of: INTEGRATED, SAML, OAUTH
  stored_vm_quota   = 50
  deployed_vm_quota = 50
}

Step-4: Code for Creating new Organization VDC

So far we have created the external network, the organization, and the organization administrator; next is to create an organization virtual data center so that the tenant can provision VMs, containers, and applications. A few important configuration parameters to consider:

  • name – vdc-tfcloud
  • Resource Type – “vcd_org_vdc”
  • Org – referring org name created in previous step
  • allocation_model – Pay as you go (represented as “AllocationVApp”).
  • network_pool_name – Network pool name as defined during provider config.
  • provider_vdc_name – Name of Provider VDC name.
  • Compute & Storage – Define compute and storage allocation.
  • network_quota – Maximum number of networks that can be provisioned in this VDC
# Create Org VDC for above org
resource "vcd_org_vdc" "vdc-tfcloud" {
  name = "vdc-tfcloud"
  org  = vcd_org.tfcloud.name
  allocation_model  = "AllocationVApp"
  provider_vdc_name = "vCD-A-pVDC-01"
  network_pool_name = "vCD-VXLAN-Network-Pool"
  network_quota     = 50
  compute_capacity {
    cpu {
      limit = 0
    }
    memory {
      limit = 0
    }
  }
  storage_profile {
    name    = "*"
    enabled = true
    limit   = 0
    default = true
  }
  enabled                  = true
  enable_thin_provisioning = true
  enable_fast_provisioning = true
  delete_force             = true
  delete_recursive         = true
}

Step-5: Code for Creating Edge Gateway for Tenant

This next section creates a new vCloud organization edge gateway by specifying its name and description. The provider configures an edge gateway to provide connectivity to one or more external networks.

  • Resource Type – “vcd_edgegateway”
  • Configuration – compact
  • advanced – this will be an advanced edge
  • distributed_routing – distributed routing is enabled
  • external_network – uplink information towards DC exit.
# Create Org VDC Edge for above org VDC
resource "vcd_edgegateway" "gw-tfcloud" {
  org                     = vcd_org.tfcloud.name
  vdc                     = vcd_org_vdc.vdc-tfcloud.name
  name                    = "gw-tfcloud"
  description             = "tfcloud edge gateway"
  configuration           = "compact"
  advanced                = true
  external_network {
     name = vcd_external_network.extnet-tfcloud.name
     subnet {
        ip_address            = "10.120.30.11"
        gateway               = "10.120.30.1"
        netmask               = "255.255.255.0"
        use_for_default_route = true
    }
  }
}

Step-6: Code for Creating Organization Routed Network

An organization VDC network with a routed connection provides controlled access to machines and networks outside of the organization VDC. System administrators (Providers) and organization administrators can configure network address translation (NAT) and firewall settings on the network’s Edge Gateway to make specific virtual machines in the VDC accessible from an external network. Things to consider:

  • resource -> must be of type “vcd_network_routed”
  • Define other networking information
# Create Routed Network for this org
resource "vcd_network_routed" "net-tfcloud-r" {
  name         = "net-tfcloud-r"
  org          = vcd_org.tfcloud.name
  vdc          = vcd_org_vdc.vdc-tfcloud.name
  edge_gateway = vcd_edgegateway.gw-tfcloud.name
  gateway      = "192.168.200.1"
  static_ip_pool {
    start_address = "192.168.200.2"
    end_address   = "192.168.200.100"
  }
}

Step-7: Code for Creating Org Isolated Network

An isolated organization VDC network provides a private network to which virtual machines in the organization VDC can connect. This network provides no connectivity to machines outside this organization VDC. Things to consider:

  • resource -> must be of type “vcd_network_isolated”
  • Define other networking information like ips etc
# Create Isolated Network for this org
resource "vcd_network_isolated" "net-tfcloud-i" {
  name    = "net-tfcloud-i"
  org     = vcd_org.tfcloud.name
  vdc     = vcd_org_vdc.vdc-tfcloud.name
  gateway = "192.168.201.1"
  static_ip_pool {
    start_address = "192.168.201.2"
    end_address   = "192.168.201.100"
  }
}

Step-8: Code for Creating Organisation Direct Network

This is restricted to System Administrator of vCloud Director Cloud Providers, A System Administrator can create an organization virtual datacenter network that connects directly to an IPv4 or IPv6 external network. VMs on the organization can use the external network to connect to other networks, including the Internet. Things to consider:

  • resource -> must be of type “vcd_network_direct”
  • Define other networking information
  • In this we are connecting directly to external network which created in step-1
# Create Direct Network for this org
resource "vcd_network_direct" "net-tfcloud-d" {
  name             = "net-tfcloud-d"
  org              = vcd_org.tfcloud.name
  vdc              = vcd_org_vdc.vdc-tfcloud.name
  external_network = "extnet-tfcloud"
}

Step-9: Code for Creating Organization Catalog

Catalogs allow tenants to group vApps and media files; in this step the provider is providing a private catalog to the tenant. Things to consider:

  • resource -> must be of type “vcd_catalog”
  • Define catalog-related information
# Create Default catalog for this org
resource "vcd_catalog" "cat-tfcloud" {
  org         = vcd_org.tfcloud.name
  name        = "cat-tfcloud"
  description = "tfcloud catalog"
  delete_force     = "true"
  delete_recursive = "true"
  depends_on       = [vcd_org_vdc.vdc-tfcloud]
}

Step-10: Code for Uploading  OVA in to Catalog

It's up to the provider: they can upload catalog items like .iso and .ova files into the private catalog above for tenants to consume, or share a public catalog with them. In this case we are uploading a few items into this private catalog. Things to consider:

  • resource -> must be of type “vcd_catalog_item”
  • ova_path -> the path in your local directory of the image to upload
  • Define other catalog item related information
# Upload an OVA into the catalog created above
resource "vcd_catalog_item" "photon-hw11" {
  org     = vcd_org.tfcloud.name
  catalog = vcd_catalog.cat-tfcloud.name
  name                 = "photon-hw11"
  description          = "photon-hw11"
  ova_path             = "/Users/tripathiavni/desktop/Sizing/photon-hw11-3.0-26156e2.ova"
  upload_piece_size    = 5
  show_upload_progress = "true"
}

Step-11: Code for Creating vAPP

In this step, as a provider, we are creating a vApp which will hold a few client binaries to run the Container Service Extension. While creating the vApp, things to consider:

  • resource -> must be of type “vcd_vapp”
  • org & vdc -> the organization and VDC created in the earlier steps
  • depends_on -> the routed network must exist first, to avoid a lock during destroy
# Create vApp for this org
resource "vcd_vapp" "CSEClientVapp" {
  name             = "CSEClientVapp"
  org              = vcd_org.tfcloud.name
  vdc              = vcd_org_vdc.vdc-tfcloud.name
  # This dependency is must to avoid a lock during destroy
  depends_on = [vcd_network_routed.net-tfcloud-r]
}

Step-12: Code for Creating Virtual Machine

In the above vApp, we will add a VM running Photon OS. While creating the VM, things to consider:

  • resource -> must be of type “vcd_vapp_vm”
  • catalog_name -> the catalog we created in the step above
  • Define other VM related information.
# Create a VM inside the vApp created above
resource "vcd_vapp_vm" "CSEClientVM" {
  name         = "CSEClientVM"
  org          = vcd_org.tfcloud.name
  vdc          = vcd_org_vdc.vdc-tfcloud.name
  vapp_name    = vcd_vapp.CSEClientVapp.name
  catalog_name = vcd_catalog.cat-tfcloud.name
  template_name = vcd_catalog_item.photon-hw11.name
  cpus = 2
  memory = 1024
  network {
    name               = vcd_network_routed.net-tfcloud-r.name
    type               = "org"
    ip_allocation_mode = "POOL"
  }
}

Putting it all together:

So I have put all this code into a single file and also created a variable file, which allows providers to onboard a new tenant in minutes; the provider admin just needs to update a few parameters in the variable file (a minimal sketch follows this list):

  • vcd_user -> Cloud Admin user name
  • vcd_pass -> Cloud Admin password
  • vcd_url -> Cloud Director provider URL
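
Below is a minimal sketch of what such a variable file and the provider wiring might look like. It is illustrative only: the URL and credentials are placeholders, and the provider block mirrors the one shown in my earlier Terraform provider post.

# variables.tf - connection variables used by the onboarding code (sketch)
variable "vcd_user" {
  description = "Cloud Admin user name"
}

variable "vcd_pass" {
  description = "Cloud Admin password"
}

variable "vcd_url" {
  description = "Cloud Director provider URL, for example https://vcd.example.com/api"
}

# Wire the variables into the vcd provider (system context)
provider "vcd" {
  user                 = var.vcd_user
  password             = var.vcd_pass
  org                  = "System"
  url                  = var.vcd_url
  allow_unverified_ssl = true # lab only; use valid certificates in production
}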

1

Once you input the parameters, run terraform plan and apply the plan; this entire process should not take more than 10 minutes to complete.

  • terraform plan -out m4.tfplan

As you can see in the above images, terraform plan will add 12 items to my Cloud Director.

  • terraform apply "m4.tfplan"

Finally, terraform created all 12 resources that we expected it to create.

Result:

As described above, all 12 tasks related to tenant onboarding completed successfully, and if you notice the highlighted boxes, everything was over in around 8 minutes, including uploading an OVA. Isn't it awesome?

NOTE: There is no need to define org/VDC in every resource if it is defined in the provider, unless you are working with multiple orgs/VDCs.

Here I am attaching the variable and code files, which you can use in your environment by just changing the variable file contents explained above. Please try these files in a non-production environment and make yourself comfortable before doing this in production. Here is the full content of the above to download. Please share feedback and suggestions in the comments section.

Automate Cloud Director Edge Firewall Rules using API

Edge Firewall Rules:

Tenants can use the edge gateway's Firewall tab to add firewall rules for that edge gateway. You can add multiple NSX Edge interfaces and multiple IP address groups as the source and destination of these firewall rules.

The Firewall rules will already have a few entries pre-built in as part of preconfigured services, which you should not need to change in most cases:

Problem:

When a provider/tenant creates firewall rules from the GUI, the user can create only one firewall rule at a time. At times the provider/tenant wants to automate firewall rule creation, especially when there are many rules to create or multiple ports to add to a single rule, to reduce the manual effort.

1

Solution:

To overcome this issue, Cloud Director offers the NSX proxy API, which helps providers/tenants automate rules. Here is the process of creating a firewall rule using the API:

  • Find NSX Edge ID using API
  • Get Edge FW Configuration
  • Post Firewall rules

Find NSX Edge ID:

To find the NSX Edge ID, you need to fire a GET at this API: “https://<vCD>/network/edges”. Here is the header information of my GET call; the body will be empty.

2

Here is API Content:

This call returns all the edges and their related information; note the ID of the edge where you want to apply the firewall rule.

3

Get Edge FW Configuration:

Though this step is not required, if you want to see which firewall rules are already configured, run a GET against the edge ID: “https://<vCD>/network/edges/8417a9fc-c1df-4c03-befd-c79f60d5d0ab/firewall/config”.

4

Post Firewall Rule:

Header Information:

To add a firewall rule, we need to do a POST against the URL “https://<vCD>/network/edges/8417a9fc-c1df-4c03-befd-c79f60d5d0ab/firewall/config/rules” with the following parameters:

  • 8417a9fc-c1df-4c03-befd-c79f60d5d0ab – this is the edge ID we got from step 1
  • Content-Type – application/xml
  • Authorization – Bearer Token

5

Body Content:

Here is the body content, which is self-explanatory. A few important items:

  • Entire body must be within <firewallRules></firewallRules>
  • Within <firewallRules></firewallRules> you can write individual rules, each in its own <firewallRule></firewallRule> section.
  • <service></service>  – In this section you will define “protocol” & “port”.
<firewallRules>
<firewallRule>
<name>New Rule</name>
<ruleType>user</ruleType>
<enabled>true</enabled>
<loggingEnabled>false</loggingEnabled>
<action>accept</action>
<destination>
<exclude>false</exclude>
<vnicGroupId>external</vnicGroupId>
<vnicGroupId>internal</vnicGroupId>
</destination>
<application>
<service>
<protocol>tcp</protocol>
<port>554</port>
<sourcePort>any</sourcePort>
</service>
<service>
<protocol>udp</protocol>
<port>554</port>
<sourcePort>any</sourcePort>
</service>
<service>
<protocol>tcp</protocol>
<port>556</port>
<sourcePort>any</sourcePort>
</service>
<service>
<protocol>udp</protocol>
<port>556</port>
<sourcePort>any</sourcePort>
</service>
</application>
</firewallRule>
</firewallRules>
Screenshot of my API call body, which should be in “application/xml” format:

7

If you need to write multiple rules in a single API call, you can create multiple <firewallRule></firewallRule> sections within a single <firewallRules></firewallRules> section.

Result:

Once the header and body are ready, do a POST and you should get a valid response of “201 Created”.

8

The vCD GUI reflects what you put into the body of the API call.

9

I hope this will help cloud providers/tenants automate rules that need to be automated, or rules that providers need to create by default when onboarding a tenant.
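
As a side note, if you are already using the Terraform vCloud Director provider covered elsewhere on this blog, the same NSX-V edge firewall rules can also be managed declaratively. The snippet below is a sketch only: it assumes a provider version that ships the vcd_nsxv_firewall_rule resource, and the org, VDC, and edge gateway names are placeholders; the service blocks mirror the TCP/UDP port 554 <service> sections from the XML body above.

# Sketch only: assumes terraform-provider-vcd with the vcd_nsxv_firewall_rule
# resource; org, VDC, and edge gateway names are placeholders
resource "vcd_nsxv_firewall_rule" "new-rule" {
  org          = "my-org"
  vdc          = "my-org-vdc"
  edge_gateway = "my-edge-gw"

  name   = "New Rule"
  action = "accept"

  source {
    ip_addresses = ["any"]
  }

  # mirrors the external/internal vnicGroupId entries in the XML body
  destination {
    gateway_interfaces = ["internal", "external"]
  }

  # one service block per protocol/port pair, like the <service> XML sections
  service {
    protocol = "tcp"
    port     = "554"
  }

  service {
    protocol = "udp"
    port     = "554"
  }
}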

Onboard Tenants on Cloud Director in less than 5 Minutes using vCD Terraform Provider

In continuation of my last post, in this post we are going to onboard a tenant using the vCloud Director Terraform provider. There are five things that we are going to do:

avn

  • Create a new Organisation for the Tenant
  • Create a new Organisation Administrator for this Tenant
  • Create a new Organisation VDC for the Tenant
  • Deploy a new Edge gateway for the Tenant
  • Create a new routed Network for the Tenant

Code for New Organisation:

In this section, we are going to create a new, enabled organisation named “T3”. The code creates a new vCloud organisation by specifying the name, full name, and description.

#Create a new org names "T3"
resource "vcd_org" "org-name" {
  name             = "T3"
  full_name        = "My organization"
  description      = "The pride of my work"
  is_enabled       = "true"
  delete_recursive = "true"
  delete_force     = "true"
}

Code for Creating Organisation Administrator:

Once you as a provider have created the org, it needs an administrator; the code below creates a local org admin. Most of it is self-explanatory, but a few important parameters are explained here:

  • Resource Type -> “vcd_org_user”
  • org & name -> these are variable, referred in variable file.
  • role -> role assigned to this user
  • password -> initial password assigned
  • depends_on -> Explicit dependencies that this resource has. These dependencies will be created before this resource
#Create a new Organization Admin
resource "vcd_org_user" "org-admin" {
org = var.org_name #variable referred in variable file 
name = var.org_admin #variable referred in variable file
description = "a new org admin"
role = "Organization Administrator"
password = "change-me"
enabled = true
email_address = "avnish@t3company.org"
depends_on = [vcd_org.org-name]
}

Code for Creating new Organisation VDC:

So far we have created the org and org admin; next is to create an organisation virtual data center so that the tenant can provision VMs, containers, and applications. A few important configuration parameters to consider:

  • name -> T3-vdc
  • Org -> T3
  • Allocation Pool -> Pay as you go (represented as “AllocationVApp”).
  • network_pool_name -> Network pool name as defined during provider config.
  • provider_vdc_name -> Name of Provider VDC name.
  • Compute & Storage -> Define compute and storage allocation.
  • vm_quota -> Maximum number of VMs that can be provisioned in this VDC
  • network_quota -> Maximum number of networks that can be created
# Create Org VDC for above org
resource "vcd_org_vdc" "vdc-name" {
  name        = var.vdc_name
  description = "The pride of my work"
  org         = var.org_name #variable referred in variable file
  allocation_model = "AllocationVApp"
  network_pool_name = "PVDC-A-VXLAN-NP"
  provider_vdc_name = "PVDC-A"
  compute_capacity {
    cpu {
      limit = 0
    }
    memory {
      limit = 0
    }
  }
  storage_profile {
    name     = "*"
    limit    = 10240
    default  = true    
  }
  metadata = {
    role    = "For Customer T3"
    env     = "staging"
    version = "v1"
  }  
  vm_quota                 = 10 #Max no. of VMs 
  network_quota            =  100
  enabled                  = true
  enable_thin_provisioning = true
  enable_fast_provisioning = true
  delete_force             = true
  delete_recursive         = true
depends_on = [vcd_org.org-name]
}

Code for Creating Edge Gateway for Tenant

This next section creates a new vCloud organisation edge gateway by specifying its name and description. The provider configures an edge gateway to provide connectivity to one or more external networks.

  • Configuration -> compact
  • advanced -> this will be an advanced edge
  • distributed_routing -> distributed routing is enabled
  • external_network ->  uplink information towards DC exit.
  • You will notice there is a ‘depends_on’ setting; this means the specified resource must exist before this resource is created.
resource "vcd_edgegateway" "egw" {
  org = var.org_name #variable referred in variable file
  vdc = var.vdc_name #variable referred in variable file
  name                    = var.edge_name
  description             = "T3 new edge gateway"
  configuration           = "compact"
  advanced                = true
  distributed_routing     = true
  external_network {
    name = "SiteA-ExtNet"
    subnet {
      ip_address            = "192.168.100.20"
      gateway               = "192.168.100.1"
      netmask               = "255.255.255.0"
      use_for_default_route = true
    }
  }
depends_on = [vcd_org_vdc.vdc-name]
}

Code for Creating Organisation Routed Network

An organization VDC network with a routed connection provides controlled access to machines and networks outside of the organization VDC. System administrators (Providers) and organization administrators can configure network address translation (NAT) and firewall settings on the network’s Edge Gateway to make specific virtual machines in the VDC accessible from an external network. Things to consider:

  • resource -> must be of type “vcd_network_routed”
  • Define other networking information
resource "vcd_network_routed" "net" {
org = var.org_name #variable referred in variable file
vdc = var.vdc_name #variable referred in variable file
name = "T3-Routed-net"
edge_gateway = var.edge_name 
gateway = "10.10.0.1"
dhcp_pool {
start_address = "10.10.0.2"
end_address = "10.10.0.100"
}
static_ip_pool {
start_address = "10.10.0.152"
end_address = "10.10.0.254"
}
depends_on = [vcd_edgegateway.egw]
}

Putting it all together:

So I have put all this code into a single file and also created a variable file, which allows providers to onboard a new tenant in less than 5 minutes; the provider admin just needs to update a few parameters in the variable file (a minimal sketch follows this list):

  • org_name -> Tenant organisation name
  • vdc_name -> Tenant Org VDC name
  • edge_name -> Tenant N/S router name
  • org_admin -> Org Admin name
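
Here is a minimal sketch of how these variables might be declared; the descriptions are illustrative, and the actual values go into the accompanying terraform.tfvars file.

# variables.tf - per-tenant values the provider admin edits (sketch)
variable "org_name" {
  description = "Tenant organisation name"
}

variable "vdc_name" {
  description = "Tenant Org VDC name"
}

variable "edge_name" {
  description = "Tenant N/S edge gateway name"
}

variable "org_admin" {
  description = "Org admin user name"
}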

15

Once you input the parameters, run terraform plan and apply the plan; this entire process should not take more than 5 minutes to complete.

  • terraform plan -out f1.tfplan
    • 16
  • terraform apply "f1.tfplan"
    • 17

Result:

As described above, all five tasks related to tenant onboarding completed successfully, and if you notice the highlighted boxes, everything was over in less than 2 minutes. Isn't it awesome?

18

Here I am attaching the variable and code files, which you can use in your environment by just changing the variable file contents (org_name, vdc_name, etc.) explained above. Please try these files in a non-production environment and make yourself comfortable before doing this in production. Here is the code file to download – Terraform.zip. Please share feedback and suggestions in the comments section.

Automate vCloud Director with Terraform Provider

The new, refreshed Terraform vCloud Director provider enables administrators and DevOps engineers to define vCloud Director “infrastructure as code” inside Terraform configuration files. This makes it an efficient automation and integration tool. The project is fully open source and available on GitHub, and HashiCorp hosts it in the “terraform-providers” namespace together with all the other official Terraform providers.

If you'd like to contribute a feature request, report an issue, or propose a code improvement, please visit the project's site below. There you can also see current activity and what's in the works.

Terraform Installation & Configuration:

Terraform installation is very simple; it is just a single binary. On Linux systems it is “terraform”, and on Windows-based systems it is “terraform.exe”. You can download the latest version of Terraform from the HashiCorp website using this direct link: https://www.terraform.io/downloads.html

I am using Windows in my lab, so I simply downloaded the Terraform Windows x64 file and put it into a folder called “c:\tf”, then added this folder to my PATH variable so that I could run terraform.exe from anywhere. This can be done by going to System Properties –> Advanced –> Environment Variables.

8

Now go to “System variables” and add the terraform.exe folder location:

  • Select Path and Click on “Edit”
  • At the end of the line, put a “;” and add the terraform.exe directory path.

9

Download Terraform vCloud Director Plugin and VS Code:

Download the latest vCloud Director Terraform plugin from here and put it into a directory; we will use this during Terraform initialization.

Nowadays I have started using Microsoft Visual Studio Code to write my automation scripts. It has a Terraform plugin to highlight and validate code, which makes it much simpler than working with different text editors and multiple windows. It's a free download; check it out, or feel free to use any other text editor of your choice.

Create Terraform files:

  1. Create a new folder
  2. Create two Terraform files (terraform.tfvars & main.tf) and put the content below into them
  3. Save both files into the folder we created in step 1
  4. terraform.tfvars:
    • This is where we give values to the variables. For example, vcd_url = “https://vcd-01a.corp.local/api” means that anytime var.vcd_url is referenced in the main.tf file, it resolves to that value. Most variables below are self-explanatory.
    • # vCloud Director Connection Variables
      vcd_user = "administrator"
      vcd_pass = "VMware1!"
      vcd_url = "https://vcd-01a.corp.local/api"
      vcd_max_retry_timeout = "60"
      vcd_allow_unverified_ssl = "true"
    • 10
  5. main.tf – This holds the actual actionable code that performs actions against vCloud Director. We do this by using a provider and multiple resources. The provider we are using in this demonstration is “vcd”; the resources are then responsible for different parts of vCloud Director. In this example, “vcd_org” is going to create, modify, or delete an org; here we are creating a new vCD organisation.
    • variable "vcd_user" {
      description = "vCloud user"
      }
      variable "vcd_pass" {
      description = "vCloud pass"
      }
      variable "vcd_allow_unverified_ssl" {
      default = true
      }
      variable "vcd_url" {}
      variable "vcd_max_retry_timeout" {
      default = 60
      }
      
      # Connect VMware vCloud Director Provider
      provider "vcd" {
      user = var.vcd_user
      password = var.vcd_pass
      org = "System"
      url = var.vcd_url
      max_retry_timeout = var.vcd_max_retry_timeout
      allow_unverified_ssl = var.vcd_allow_unverified_ssl
      }
      
      #Create a new org names "T3"
      resource "vcd_org" "org-name" {
      name = "T3"
      full_name = "My organization"
      description = "The pride of my work"
      is_enabled = "true"
      delete_recursive = "true"
      delete_force = "true"
      }
    • 11

Initialize terraform plugin:

The terraform “init” command is used to initialize a working directory containing Terraform configuration files. This is the first command you should run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times; it will never delete your existing configuration or state.

During init, Terraform searches the configuration for both direct and indirect references to providers and attempts to load the required plugins. For providers distributed by HashiCorp, init will automatically download and install plugins if necessary.

On my Windows machine, init didn't download the plugin for some reason, so I downloaded the vCD plugin in the steps above; during initialization I point towards my directory, and it reports that Terraform will use vCloud Director plugin version 2.6.

1

Terraform Plan:

The terraform “plan” command is a convenient way to check whether the execution plan for a set of changes matches your expectations without making any changes to real resources or to the state.

The terraform plan command checks the files created above, determines what changes it needs to make to the environment, and gives you a summary of tasks.

2

Terraform Apply

The “terraform apply” command is used to apply the changes required to reach the desired state of the configuration, i.e. the pre-determined set of actions generated by a terraform plan run.

3

Org Created Successfully:

The above step created an organisation in my vCloud Director environment, and it took only a few minutes.

1413

Lots of development is happening on the Terraform provider for vCloud Director, which will help VMware Cloud Providers automate repeated tasks. I will continue to add a few more blog articles on this topic, stay tuned…

Configure HCX for Cloud to Cloud Migrations

VMware HCX is an application mobility platform designed to simplify application migration, workload rebalancing, and business continuity across data centers and clouds.

Application migration

You can schedule and migrate thousands of vSphere virtual machines within and across data centers or Clouds without requiring a reboot.

Change platforms or upgrade vSphere versions

With HCX, you can migrate workloads from vSphere 5.x and non-vSphere (KVM and Hyper-V) environments within and across data centers or clouds to current vSphere versions without requiring an upgrade.

Workload rebalancing

Workload rebalancing provides a mobility platform across cloud regions and cloud providers to allow customers to move applications and workloads at any time to meet scale, cost management, compliance, and vendor neutrality goals.

HCX Deployment Types

  • Legacy vSphere to SDDC

    • In this deployment type, the HCX Manager at the Legacy site initiates Site Pairing, and the Service Mesh appliances initiate the Interconnect tunnels. The HCX Manager and Service Mesh appliances at the SDDC site are the receivers.
  • Legacy vSphere to Public Cloud

    • In this deployment type, the HCX Manager at the Legacy site initiates Site Pairing, and the Service Mesh appliances initiate the Interconnect tunnels. The HCX Manager and Service Mesh appliances at the Public Cloud are the receivers.
  • Cloud-to-Cloud

    • In this deployment type, the HCX Manager at the SDDC or Public Cloud can initiate or receive Site Pairing requests and act as the initiator or receiver during the HCX Interconnect tunnel creation.

In this post, we are going to configure HCX cloud-to-cloud between two SDDCs deployed on VMware Cloud on AWS.

3

Pre-requisite

  • Deploy SDDC on Site A
  • Deploy SDDC on Site B
  • Deploy HCX on Site A
  • Deploy HCX on Site B

Connectivity Options

  • Over EIP
  • Over Direct Connect

In this post I will be configuring HCX cloud-to-cloud migration over EIP; HCX sets up its own secure, strongly encrypted tunnel, so it is safe to migrate over EIP. You can also set this up using a VPN connection for the management network, or using Direct Connect between the two clouds.

Open Firewall

Once both the SDDCs and HCX have been deployed, go to the “Network & Security” section and create firewall rules for HCX on the management gateway, as below.

4

With the above firewall rules, you should be able to access HCX over the public IP.

Site Pairing

  • On the HCX console, go to “Site Pairing” and enter the second site's HCX server name and credentials.


  • If you want to do reverse migration, also do the pairing from the second site to the primary site.


Create Service Mesh

  • Click Create Service Mesh
  • Select Sites:
    • Click each drop-down and select a source and destination site. Only connected Site Pairs are displayed
    • Click Continue.
  • Select Compute Profiles: 
    • Click the Select Source Compute Profile drop-down and select a Compute Profile.
    • Click the Select Remote Compute Profile drop-down and select a Compute Profile.
    • Click Continue.

NOTE – you can pickup existing compute profile or you can create a new profile.

  • By default, the HCX Interconnect uses the uplink network profiles defined in the compute profile for the source and destination sites. You can override the default; as an example, an override can be useful in vCloud Director-based deployments where an uplink network deviates from the common configuration created for an organization to consume during Service Mesh creation.
    • Click the Select Source Uplink Network Profile drop-down.
    • In VMC case we will choose “External Network” which is already having two public IPs.
    • Click Close.
    • Click Continue.
  • Configure the Network Extension appliances deployed per switch and click Continue.
  • Optionally Configure HCX Traffic Engineering features such as Traffic bandwidth control etc..
  • Review the topology and name the service Mesh and click Finish.
  • Wait a few minutes; this will deploy all the appliances and show that connectivity across the sites is up.


If everything is done correctly, you should see all the HCX services up and running.

7

Configure Network Extension

  • In the HCX dashboard, select Network Extension.
  • At the top of the page, select Extend Network.
  • Select one or more Distributed Port Groups or NSX Logical Switches.
  • Specify the network segment that needs to be extended.


Live Migrate a VM from one Cloud to another Cloud

  • Choose a VM that you want to migrate from one cloud to the other
  • Specify the destination resource pool, storage, network, etc., and click Migrate
  • Check the progress of the migration in vCenter; your application users will not even know that their application has been migrated to another cloud/SDDC


As you can see, HCX can easily migrate applications as required across clouds like AWS, IBM, Oracle, Azure, or VCPP-based clouds, or back to on-prem or hosted clouds. Most importantly, HCX solves the problem of cloud lock-in: it gives customers the freedom to move VMs across clouds securely, quickly, and without impact to applications.

11

Please share feedback 🙂

vCloud Director 10 – NSX-T – Tenant Configuration

In continuation of my previous post, in this post I will be covering the tenant-side configuration of vCloud Director 10 with NSX-T.

Create OrgVDC

To provide resources to a tenant organization, you create one or more organization virtual data centers for it. To create an OrgVDC, go to “Cloud Resources”, then “Organization VDCs”, and click New:

  1. Name the tenant OrgVDC appropriately
  2. Select the organisation
  3. Select the PVDC which is NSX-T backed
  4. Choose the appropriate allocation model (Flex)
  5. Configure reservation pool related settings
  6. Choose the appropriate storage policy
  7. Enable “Network Pool”, select the correct network pool, and specify the maximum number of networks
  8. Review and click Finish


Create Org Edges

To connect tenant networks created inside the org VDC to the outside world, we need to create edge gateways, which internally create a T1 router automatically. Here are the steps to create an edge:

  1. Log in to the tenant by clicking “Open in Tenant Portal”, then go to Edges and click New
    • 29.png
  2. Name the tenant edge appropriately
  3. Select the IP segment and reserve a few IPs to talk to the external world
  4. Review the configuration and submit


If you look back into NSX-T, this creates a Tier-1 router automatically and connects it to the Tier-0 router.

35.png

Org Edge supported Tenant Operation:

Currently the following T1 GW networking services are available to tenants:

  • Firewall
  • NAT
  • DHCP (without binding and relay)
  • DNS forwarding
  • IPSec VPN (API only; policy-based with pre-shared key only)

42.png

Create Org Networks

The first network to create for a tenant is an organization virtual data center network. An orgVDC network allows virtual machines in the orgVDC to communicate with each other and to access other networks, including orgVDC networks and external networks, either directly or through an edge gateway that can provide firewall and NAT services as of now. There are three types of org networks:

Isolated:

You can add an isolated orgVDC network, which is accessible only by this organization. This network provides no connectivity to virtual machines outside this organization. Virtual machines outside of this organization have no connectivity to the virtual machines in the organization.

Routed:

A routed network provides controlled access to an external network. System administrators and organization administrators can configure network address translation (NAT), firewall, and VPN settings to make specific virtual machines accessible from the external network.

Imported:

You can import an existing NSX-T overlay segment into the org; for this network type, all networking needs to be configured and managed outside of vCloud Director.


Tenant VM External Access:

As I said, tenant networks are not advertised; we need to create SNAT rules to allow external access:

44.png

43

NOTE – Tenants can only self-service isolated and routed networks; a few options like DFW and load balancer have not yet been exposed to tenants.

vCloud Director 10 – NSX-T – Provider Configuration

As you may be aware, vCloud Director from its inception relied on vCNS, and later on NSX-V, to provide on-demand, self-service cloud networking capabilities. VMware is now moving towards its newly re-written networking platform, NSX-T, which is getting more mature and feature-rich with every new version. vCloud Director 10 brings many NSX-T capabilities in, offering more and more self-service capabilities to tenants and ease of implementation and operation for providers. In this post I am covering how to integrate NSX-T with vCD from the provider perspective.

Pre-requisite

As you may be aware, NSX-T is no longer coupled with or dependent on vCenter, so to integrate NSX-T with vCloud Director you must first install and configure NSX-T Data Center. Here are the high-level steps:

  • Deploy and configure the NSX-T Manager virtual appliances.
  • Create transport zones based on your networking requirements.
  • Deploy and configure Edge nodes and an Edge cluster.
  • Configure the ESXi host transport nodes; these will become PVDC resources for NSX-T based tenants.
  • Create a Tier-0 gateway; this will work as the “External Network” for vCloud Director.

Register NSX-T Manager

Once the NSX-T setup is done, log in to vCloud Director with administrator credentials, click “vSphere Resources”, and go to NSX-T Managers to add the NSX-T manager.
12

Create Network Pool

A network pool is a group of undifferentiated networks that is available for use in an organization virtual datacenter to create vApp networks and certain types of organization virtual datacenter networks.
Once the NSX-T manager is added, the next thing is to create a network pool. Go back to “Cloud Resources”, go to “Network Pools”, and click New:
3.png
Here are the network pool creation steps:
  1. Name it appropriately
  2. Select the “Geneve Backed” network pool type
  3. Select the appropriate NSX-T provider (you can have multiple NSX-T providers)
  4. Select the appropriate overlay transport zone
  5. Review and submit


Configure External Networks

External networks provide a connection to the outside world (internet). External networks are backed by an NSX-T Tier-0 router.

As I said in the prerequisites section, you need to manually create a Tier-0 in NSX-T; this T0 router will provide external network access to your tenants and should be routable from the internet. Creating an active-active T0 in ECMP mode is the recommended practice.

14.png

Once the T0 is created, you import it into vCloud Director 10. You will also need to define an IP pool, which will be used to sub-allocate IPs to tenants.


Below is the process to create a vCloud Director 10 external network by importing the Tier-0 router created inside NSX-T.

  1. Choose the backing type “NSX-T Resources (Tier-0 Router)” and select the registered NSX-T
  2. Provide a name
  3. Select the Tier-0 router
  4. Add a “Network Pool” with gateway details
  5. Review and complete, which imports the T0 into the vCloud Director construct

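If you prefer to automate this step with the Terraform vCloud Director provider covered elsewhere on this blog, newer provider releases expose NSX-T backed external networks as a resource. The sketch below is illustrative only: it assumes the vcd_external_network_v2 resource and its companion data sources, and all names and addresses are placeholders.

# Sketch only: assumes a terraform-provider-vcd version with NSX-T support;
# names and addresses are placeholders
data "vcd_nsxt_manager" "main" {
  name = "nsxt-manager-01" # NSX-T manager registered in vCD
}

data "vcd_nsxt_tier0_router" "t0" {
  name            = "T0-for-vcd" # the manually created Tier-0
  nsxt_manager_id = data.vcd_nsxt_manager.main.id
}

resource "vcd_external_network_v2" "nsxt-ext-net" {
  name        = "nsxt-external-network"
  description = "External network backed by the NSX-T Tier-0"

  nsxt_network {
    nsxt_manager_id      = data.vcd_nsxt_manager.main.id
    nsxt_tier0_router_id = data.vcd_nsxt_tier0_router.t0.id
  }

  ip_scope {
    enabled       = true
    gateway       = "192.168.100.1"
    prefix_length = "24"

    static_ip_pool {
      start_address = "192.168.100.10" # pool used to sub-allocate IPs to tenants
      end_address   = "192.168.100.100"
    }
  }
}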

Create Provider VDC

Now you can create a Provider VDC (PVDC), which is basically mapped to a vSphere cluster or a resource pool. For the PVDC to work successfully, ensure that the vSphere cluster has been prepared with NSX-T and is part of a transport zone. When creating an NSX-T backed PVDC, you will have to specify the Geneve network pool created in the previous step.

Go to “Cloud Resources” – “Provider VDCs” and click “NEW” to create a new PVDC backed by NSX-T based networks.

  1. Name your PVDC
  2. Select the vCenter that has the NSX-T backed cluster
  3. Select the appropriate cluster and VM hardware version
  4. Select the appropriate storage policy
  5. Select the NSX-T manager and network pool (the Geneve-backed pool created above)
  6. Review the configuration and finish


If everything is configured properly, the PVDC gets created successfully.

21.png

This completes the vCloud Director configuration from the provider perspective. In the next post I will be covering the tenant onboarding process on NSX-T based networks.

vCloud Director 10 : T-Shirt Sizing

In vCloud Director 9.7, compute policies were introduced to offer and manage T-shirt sizing of VMs, which I covered in detail in my earlier post. In vCloud Director 10 the same concept has been brought into the GUI, making it easy to implement and manage; in vCD 10 it is called a “Sizing Policy”.

From the cloud provider perspective, a VM sizing policy defines the compute resource allocation for virtual machines within an organization VDC. Sizing policies allow the provider to control the following aspects of compute resource consumption at the virtual machine level:

  • Number of vCPU, vCPU clock speed, reservations, limits and shares
    • 1.png
  • Amount of memory allocated to the virtual machine , reservation, limits and shares.
    • 2.png

Create T-Shirt Sizes:

Let's create a few example T-shirt sizing policies:

  • Policy Name – X1
    • Description: Small-size VM policy Memory: 1024 Number of vCPUs: 1
    • Name: X1
    • Memory: 1024
    • Number of vCPUs: 1
  • Policy Name – X2
    • Description: Medium-size VM policy Memory: 2048 Number of vCPUs: 2
    • Name: X2
    • Memory: 2048
    • Number of vCPUs: 2
  • Policy Name – X3
    • Description: Large-size VM policy Memory: 4096 Number of vCPUs: 4
    • Name: X3
    • Memory: 4096
    • Number of vCPUs: 4
  • Policy Name – X4
    • Description: X-Large-size VM policy Memory: 8192 Number of vCPUs: 8
    • Name: X4
    • Memory: 8192
    • Number of vCPUs: 8

Create T-Shirt Sizing policies:

  1. The cloud provider administrator logs in to vCloud Director, goes to “VM Sizing Policies”, and clicks “New” to create a new policy
    • 7.png
  2. Name and describe the policy as per the example above and move to Next.
    • 3.png
  3. In the next section, enter the CPU-related parameters; in this example I am choosing only “vCPU Count”. Providers can choose based on their requirements, or leave everything blank, as none of the fields are mandatory.
    • 4.png
  4. In the next section, enter the memory-related parameters; in this example I am choosing only “Memory”. Providers can choose based on their requirements, or leave everything blank, as none of the fields are mandatory.
    • 5.png

That's it; it really is that simple to create policies. Follow the same steps to create the remaining policies from the example above.
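
If you would rather script these policies than click through the GUI, newer versions of the Terraform vCloud Director provider used elsewhere on this blog expose sizing policies as a resource. The snippet below is a sketch only, assuming the vcd_vm_sizing_policy resource; it mirrors policy X1 from the example above.

# Sketch only: assumes a terraform-provider-vcd version that ships the
# vcd_vm_sizing_policy resource; values mirror policy X1 above
resource "vcd_vm_sizing_policy" "x1" {
  name        = "X1"
  description = "Small-size VM policy"

  cpu {
    count = "1" # number of vCPUs
  }

  memory {
    size_in_mb = "1024" # memory allocated to the VM
  }
}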

Publish Created Policies:

vCloud Director system administrators create and manage VM sizing policies at a global level and can publish individual policies to one or more organization VDCs.

In the steps above we created the policies; now we need to publish them to organisation VDCs.

  1. Select Cloud Resources then click on Organization VDCs and go inside an organization VDC
    • 8.png
  2. Inside VDC , go to VM Sizing Policies and click on Add
    • 9.png
  3. Select the policies that you want to make available for a Particular oVDC/Tenant
    • 10.png
  4. You can set policies as default policies, which makes them appear as the default choice for tenants during VM and vApp creation and VM edit.
    • 11.png

Once the policies are published to the organisation's VDC, when a tenant user logs in and tries to deploy a new VM, he/she now sees options to choose T-shirt sizes with their descriptions. If the user does not choose any policy, the default policy is picked up; I showed above how to set the default policy.

12.png

Tag Template with T-Shirt sizes

So while cloud providers can control the sizing of new virtual machines, what about templates?

vCloud Director helps providers achieve this by associating the VMs of a vApp template with specific VM sizing policies; providers/tenants can tag individual VMs of a vApp template with the policies they want to assign.

To tag a template with a particular sizing policy, log in to the org, go to Libraries, and select vApp Templates from the left panel.

13.png

Click on or highlight the particular template and select Tag with Compute Policies.

14.png

“TAG WITH COMPUTE POLICIES” gives two options to tag with:

  • VM Placement Policies – which allow a VM to be deployed to a particular cluster.
  • VM Sizing Policy – as explained in this post; when a user deploys a VM from the template, it gets deployed according to the tagged VM sizing policy.

15.png

This completes the process; gain control of your cloud offerings.

vCloud Director 10 : VM Placement Policies

vCloud Director 10 has introduced a new concept called VM placement policies, which helps cloud providers control virtual machine (VM) placement on a specific cluster or host. VM placement policies give cloud providers various options to allocate resources for use cases like:

  1. Deploy VMs to a specific cluster based on performance requirements
  2. Deploy VMs to a specific cluster based on resource requirements
  3. Deploy VMs based on licensing requirements, as part of Oracle/SQL license optimisation
  4. Allocate specific hosts to specific tenants
  5. Deploy container or other special-use-case VMs to a specific host/cluster
  6. Restrict elastic VDC pools to deploy VMs to a specific cluster

The vCD provider administrator creates and manages VM placement policies; they are created and managed per provider VDC, because a VM placement policy is scoped at the provider VDC level.

Create a VM Placement Policy

Before we create VM placement policies, the provider needs to perform a few steps in vCenter. Log in to the vCenter providing resources to vCloud Director and go to Cluster -> Configure -> VM/Host Groups.

1.png

In this case I want to limit the deployment of Oracle and MS SQL VMs to specific hosts due to licensing, so let's create host groups and VM groups:

Host Groups: 

To create Host Groups , Click on Add inside VM/Host Groups:

  1. Enter  Host Group Name
  2. Select Type as “Host Group”
  3. Click on Add to add Host/Hosts of the cluster.

2

VM Groups

To create VM Groups , Click on Add inside VM/Host Groups

  1. Enter  VM Group Name
  2. Select Type as “VM Group”
  3. Click on Add to add VM/VMs of the cluster. (select any dummy VM as of now)

3

Once both groups have been created, go to VM/Host Rules in the cluster and create a rule.

4.png

VM/Host Rules

To create VM/Host Rules, Click on Add inside VM/Host Rules

  1. Enter  Rule Name
  2. Ensure “Enable rule”
  3. Select rule type as “Virtual Machine to Hosts”
  4. VM Group: Select VM Group that we have created above
  5. Here you have four choices (in my case I chose a Must rule):
    • Must run on host in group
    • Should run on host in group
    • Must not run on host in group
    • Should not run on host in group
  6. Host Group: Select Host Group that we have created above

5.png

From the vCenter perspective we are done; we have multiple choices for creating VM-to-host affinity/anti-affinity rules. Once the rules are created, vCloud Director picks up only the “VM Groups”, which the provider will expose to tenants.

Create VM Placement Policies in vCloud Director

  1. Go to Provider VDCs.
  2. Click on a provider VDC from the list , in my case it was “nsxtpvdc”
  3. Click on “VM Placement Policies”
  4. Click the VM Placement Policies tab and click New.

6

New Policy Creation Wizard

  1. On the first page, click Next
    1. 7
  2. Enter a name for the VM placement policy and description and click Next
    1. 8.png
  3. Select the VM groups or logical VM groups to which you want the VM to be linked and click Next.
    1. 9.png

  4. Review the VM placement policy settings and click Finish.
    1. 10.png

Publish VM Placement Policies to Org VDC

When a provider creates a VM placement policy, it is not visible to tenants. The provider needs to publish the policy to an org VDC to make it available; publishing makes the policy visible to tenants, who can select it when they:

  • Create a new standalone VM
  • Create a VM from a template
  • Edit a VM
  • Add a VM to a vApp
  • Create a vApp from a vApp template

To publish this newly created policy to tenants , go to:

  1. Organization VDCs and Select an organization VDC
    1. 11.png
  2. Click the VM Placement Policies tab and Click Add.
    1. 12.png
  3. Select the VM placement policies that you want to add to the organization VDC and click OK.
    1. 13.png
  4. The provider can mark certain policies as “Default”; when a customer does not choose any policy, the system automatically uses the default.
    1. 14.png

Policy Usage by Tenant

Once the policies have been created and exposed to the tenant organisation, tenants can use them while provisioning VMs. Here I have created two policies, “Oracle” and “SQL”, and the tenant can choose based on workload requirements.

15.png

NOTE – Placement policies are optional; a provider can continue to use the default policy created during installation, and only one policy can be assigned to a VM.

This completes the creation of placement policies and their exposure to tenants. Please feel free to share/comment.