Tag: VMWARE

  • Building Windows Custom Machine Image for Creating Tanzu Workload Clusters

    If your organisation builds applications on Windows components (such as the .NET Framework) and wants to run Windows containers on VMware Tanzu, this blog post shows how to build a Windows custom machine image and deploy a Windows Kubernetes cluster.

    Windows Image Prerequisites 

    • vSphere 6.7 Update 3 or greater
    • A macOS or Linux workstation with Docker Desktop and Ansible installed
    • Tanzu Kubernetes Grid v1.5.x or greater
    • Tanzu CLI
    • A recent Windows Server 2019 ISO image (newer than April 2021), downloaded from the Microsoft Developer Network (MSDN) or a Volume Licensing (VL) account
    • The latest VMware Tools Windows ISO image; download it from VMware Tools
    • In vCenter, a folder (for example, iso) created inside a datastore, with the Windows ISO and the VMware Tools ISO uploaded to it

    Build a Windows Image 

    • Deploy a Tanzu Management Cluster with the Ubuntu 20.04 Kubernetes v1.22.9 OVA
    • Create a YAML file named builder.yaml with the following configuration:
    apiVersion: v1
    kind: Namespace
    metadata:
     name: imagebuilder
    ---
    apiVersion: v1
    kind: Service
    metadata:
     name: imagebuilder-wrs
     namespace: imagebuilder
    spec:
     selector:
       app: image-builder-resource-kit
     type: NodePort
     ports:
     - port: 3000
       targetPort: 3000
       nodePort: 30008
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
     name: image-builder-resource-kit
     namespace: imagebuilder
    spec:
     selector:
       matchLabels:
         app: image-builder-resource-kit
     template:
       metadata:
         labels:
           app: image-builder-resource-kit
       spec:
         nodeSelector:
           kubernetes.io/os: linux
         containers:
         - name: windows-imagebuilder-resourcekit
           image: projects.registry.vmware.com/tkg/windows-resource-bundle:v1.22.9_vmware.1-tkg.1
           imagePullPolicy: Always
           ports:
             - containerPort: 3000

    Connect the Kubernetes CLI to your management cluster by running:

    #kubectl config use-context MY-MGMT-CLUSTER-admin@MY-MGMT-CLUSTER

    Apply the builder.yaml file:

    #kubectl apply -f builder.yaml

    To ensure the container is running, run:

    #kubectl get pods -n imagebuilder

    List the cluster's nodes with wide output, and note the INTERNAL-IP value of the node whose ROLES column lists control-plane,master:

    #kubectl get nodes -o wide

    Retrieve the containerd component's URL and SHA by querying the control plane node's NodePort endpoint:

    #curl http://CONTROLPLANENODE-IP:30008

    Take note of containerd.path and containerd.sha256 values. The containerd.path value ends with something like containerd/cri-containerd-v1.5.9+vmware.2.windows-amd64.tar.
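    As a quick sketch of how the final containerd URL is assembled (assuming the resource bundle serves its files under the /files/ path of the same NodePort endpoint; verify against your actual curl output):

```shell
# Illustrative only: substitute the control plane node IP and the
# containerd.path value returned by your curl query against port 30008.
CONTROLPLANE_IP="10.10.10.5"
CONTAINERD_PATH="containerd/cri-containerd-v1.5.9+vmware.2.windows-amd64.tar"

# Assumed URL shape: NodePort endpoint + /files/ + containerd.path
CONTAINERD_URL="http://${CONTROLPLANE_IP}:30008/files/${CONTAINERD_PATH}"
echo "${CONTAINERD_URL}"
```

    These two values feed the containerd_url and containerd_sha256_windows fields of the windows.json file described next.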

    Create a JSON file in an empty folder named windows.json with the following configuration:

    {
     "unattend_timezone": "WINDOWS-TIMEZONE",
     "windows_updates_categories": "CriticalUpdates SecurityUpdates UpdateRollups",
     "windows_updates_kbs": "",
     "kubernetes_semver": "v1.22.9",
     "cluster": "VSPHERE-CLUSTER-NAME",
     "template": "",
     "password": "VCENTER-PASSWORD",
     "folder": "",
     "runtime": "containerd",
     "username": "VCENTER-USERNAME",
     "datastore": "DATASTORE-NAME",
     "datacenter": "DATACENTER-NAME",
     "convert_to_template": "true",
     "vmtools_iso_path": "VMTOOLS-ISO-PATH",
     "insecure_connection": "true",
     "disable_hypervisor": "false",
     "network": "NETWORK",
     "linked_clone": "false",
     "os_iso_path": "OS-ISO-PATH",
     "resource_pool": "",
     "vcenter_server": "VCENTER-IP",
     "create_snapshot": "false",
     "netbios_host_name_compatibility": "false",
     "kubernetes_base_url": "http://CONTROLPLANE-IP:30008/files/kubernetes/",
     "containerd_url": "CONTAINERD-URL",
     "containerd_sha256_windows": "CONTAINERD-SHA",
     "pause_image": "mcr.microsoft.com/oss/kubernetes/pause:3.5",
     "prepull": "false",
     "additional_prepull_images": "mcr.microsoft.com/windows/servercore:ltsc2019",
     "additional_download_files": "",
     "additional_executables": "true",
     "additional_executables_destination_path": "c:/k/antrea/",
     "additional_executables_list": "http://CONTROLPLANE-IP:30008/files/antrea-windows/antrea-windows-advanced.zip",
     "load_additional_components": "true"
    }

    Update the placeholder values in the file to match your environment (vCenter credentials, datastore, datacenter, network, ISO paths, and the containerd URL and SHA noted earlier).
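    A purely illustrative sanity check can catch placeholders you forgot to replace; the template's placeholders are ALL-CAPS hyphenated tokens, so a grep for that pattern flags leftovers (the sample file below is hypothetical and written to /tmp so your real windows.json is untouched):

```shell
# Hypothetical sample with one leftover placeholder.
cat > /tmp/windows-sample.json <<'EOF'
{ "vcenter_server": "VCENTER-IP", "datacenter": "SDDC-Datacenter" }
EOF

# Flag any quoted values that still look like ALL-CAPS template placeholders.
LEFTOVER=$(grep -oE '"[A-Z]+(-[A-Z]+)+"' /tmp/windows-sample.json || true)
if [ -n "$LEFTOVER" ]; then
  echo "unreplaced placeholders: $LEFTOVER"
fi
```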

    Add the XML file that contains the Windows settings by following these steps:

    • Go to the autounattend.xml file on VMware {code} Sample Exchange.
    • Select Download.
    • If you are using the Windows Server 2019 evaluation version, remove <ProductKey>...</ProductKey>.
    • Name the file autounattend.xml.
    • Save the file in the same folder as the windows.json file and change the file's permissions to 777 (for example, chmod 777 autounattend.xml).

    From your client VM, run the following command from the folder containing your windows.json and autounattend.xml files:

    #docker run -it --rm --mount type=bind,source=$(pwd)/windows.json,target=/windows.json --mount type=bind,source=$(pwd)/autounattend.xml,target=/home/imagebuilder/packer/ova/windows/windows-2019/autounattend.xml -e PACKER_VAR_FILES="/windows.json" -e IB_OVFTOOL=1 -e IB_OVFTOOL_ARGS='--skipManifestCheck' -e PACKER_FLAGS='-force -on-error=ask' -t projects.registry.vmware.com/tkg/image-builder:v0.1.11_vmware.3 build-node-ova-vsphere-windows-2019

    NOTE: Before you run the command above, make sure your workstation is running Docker Desktop and has Ansible installed.

    To ensure the Windows image is ready to use, select your host or cluster in vCenter, select the VMs tab, then select VM Templates to see the Windows image listed.

    Use a Windows Image for a Workload Cluster

    The YAML below shows how to deploy a workload cluster that uses your Windows image as a template. (This Windows cluster uses NSX Advanced Load Balancer.)

    #! ---------------------------------------------------------------------
    #! non proxy env configs
    #! ---------------------------------------------------------------------
    CLUSTER_CIDR: 100.96.0.0/11
    CLUSTER_NAME: tkg-workload02
    CLUSTER_PLAN: dev
    ENABLE_CEIP_PARTICIPATION: 'true'
    IS_WINDOWS_WORKLOAD_CLUSTER: "true"
    VSPHERE_WINDOWS_TEMPLATE: windows-2019-kube-v1.22.5
    ENABLE_MHC: "false"
    
    IDENTITY_MANAGEMENT_TYPE: oidc
    
    INFRASTRUCTURE_PROVIDER: vsphere
    SERVICE_CIDR: 100.64.0.0/13
    TKG_HTTP_PROXY_ENABLED: false
    DEPLOY_TKG_ON_VSPHERE7: 'true'
    VSPHERE_DATACENTER: /SDDC-Datacenter
    VSPHERE_DATASTORE: WorkloadDatastore
    VSPHERE_FOLDER: /SDDC-Datacenter/vm/tkg-vmc-workload
    VSPHERE_NETWORK: /SDDC-Datacenter/network/tkgvmc-workload-segment01
    VSPHERE_PASSWORD: <encoded:T1V3WXpkbStlLUlDOTBG>
    VSPHERE_RESOURCE_POOL: /SDDC-Datacenter/host/Cluster-1/Resources/Compute-ResourcePool/Tanzu/tkg-vmc-workload
    VSPHERE_SERVER: 10.97.1.196
    VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa....loudadmin@vmc.local
    
    VSPHERE_USERNAME: cloudadmin@vmc.local
    WORKER_MACHINE_COUNT: 3
    VSPHERE_INSECURE: 'true'
    ENABLE_AUDIT_LOGGING: 'true'
    ENABLE_DEFAULT_STORAGE_CLASS: 'true'
    ENABLE_AUTOSCALER: false
    AVI_CONTROL_PLANE_HA_PROVIDER: 'true'
    OS_ARCH: amd64
    OS_NAME: photon
    OS_VERSION: 3
    
    WORKER_SIZE: small
    CONTROLPLANE_SIZE: large
    REMOVE_CP_TAINT: "true"
    

    If your cluster YAML file is correct, you should see the new Windows cluster start to deploy, and after some time it should complete successfully.

    If you are using NSX ALB (AKO) or Pinniped and see that those pods are not running, please refer Here

    NOTE – If you see the error Permission denied: ‘./packer/ova/windows/windows-2019/autounattend.xml’ during the image build process, check the permissions of the autounattend.xml file.

  • Cloud Director OIDC Configuration using OKTA IDP

    OpenID Connect (OIDC) is an industry-standard authentication layer built on top of the OAuth 2.0 authorization protocol. The OAuth 2.0 protocol provides security through scoped access tokens, and OIDC provides user authentication and single sign-on (SSO) functionality. For more detail, refer to RFC 6749 (https://datatracker.ietf.org/doc/html/rfc6749). There are two main types of authentication that you can perform with Okta:

    • The OAuth 2.0 protocol controls authorization to access a protected resource, like your web app, native app, or API service.
    • The OpenID Connect (OIDC) protocol is built on the OAuth 2.0 protocol and helps authenticate users and convey information about them. It’s also more opinionated than plain OAuth 2.0, for example in its scope definitions.

    If you want to import users and groups from an OpenID Connect (OIDC) identity provider into your Cloud Director system (provider) or tenant organization, you must configure the provider or tenant organization with that OIDC identity provider. Imported users can then log in to the system or tenant organization with the credentials established in the OIDC identity provider.

    We can use VMware Workspace ONE Access (vIDM) or any public identity provider, but the OAuth authentication endpoint must be reachable from the VMware Cloud Director cells. In this blog post we will use OKTA OIDC and configure VMware Cloud Director to use it for authentication.

    Step:1 – Configure OKTA OIDC

    For this blog post, I created a developer account on OKTA at https://developer.okta.com/signup. Once the account is ready, follow the steps below to add Cloud Director as an application in the OKTA console:

    • In the Admin Console, go to Applications > Applications.
    • Click Create App Integration.
    • To create an OIDC app integration, select OIDC – OpenID Connect as the Sign-in method.
    • Choose the type of application you plan to integrate with Okta; for Cloud Director, select Web Application.
    • App integration name: Specify a name for Cloud Director
    • Logo (Optional): Add a logo to accompany your app integration in the Okta org
    • Grant type: Select from the different grant type options
    • Sign-in redirect URIs: The Sign-in redirect URI is where Okta sends the authentication response and ID token for the sign-in request. In our case, for the provider use https://<vcd url>/login/oauth?service=provider; if you are configuring a tenant, use https://<vcd url>/login/oauth?service=tenant:<org name>
    • Sign-out redirect URIs: After your application contacts Okta to close the user session, Okta redirects the user to this URI.
    • Assignments > Controlled access: The default access option assigns and grants login access to this new app integration for everyone in your Okta org, or you can choose Limit access to selected groups.

    Click Save. This action creates the app integration and opens the settings page to configure additional options.

    The Client Credentials section has the Client ID and Client secret values for the Cloud Director integration. Copy both values, as we will enter them in Cloud Director.

    The General Settings section has the Okta domain. Copy this value as well, as we will enter it in Cloud Director.

    Step:2 – Cloud Director OIDC Configuration

    Now I am going to configure OIDC authentication for the provider side of Cloud Director; with very minor changes (the tenant URL), it can be configured for tenants too.

    In Cloud Director, from the top navigation bar select Administration; in the left panel, under Identity Providers, click OIDC and then click CONFIGURE.

    General: Make sure that the OpenID Connect status is active, and enter the client ID and client secret captured from the OKTA app registration above.

    To use the information from a well-known endpoint to automatically fill in the configuration information, turn on the Configuration Discovery toggle and enter a URL; for OKTA the URL looks like https://<domain.okta.com>/.well-known/openid-configuration. Then click NEXT.
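    Under the hood, Configuration Discovery simply fetches that well-known document and reads standard OIDC metadata fields from it (issuer, authorization_endpoint, token_endpoint, jwks_uri, and so on). A small sketch with a made-up sample document; the domain and paths are illustrative, not your real Okta values:

```shell
# Sample discovery document (illustrative values, standard OIDC field names).
cat > /tmp/oidc-discovery.json <<'EOF'
{"issuer":"https://dev-123456.okta.com","jwks_uri":"https://dev-123456.okta.com/v1/keys"}
EOF

# jwks_uri is the endpoint Cloud Director polls for signing keys when
# Automatic Key Refresh is enabled (see Key Configuration below).
JWKS_URI=$(sed -n 's/.*"jwks_uri":"\([^"]*\)".*/\1/p' /tmp/oidc-discovery.json)
echo "$JWKS_URI"
```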

    Endpoint: Clicking NEXT populates the endpoint information automatically; it is essential, however, to review and confirm the information.

    Scopes: VMware Cloud Director uses the scopes to authorize access to user details. When a client requests an access token, the scopes define the permissions that this token has to access user information. Enter the scope information and click Next.

    Claims: You can use this section to map the information VMware Cloud Director gets from the user info endpoint to specific claims. The claims are strings for the field names in the VMware Cloud Director response

    This is the most critical piece of configuration. Mapping of this information is essential for VCD to interpret the token/user information correctly during the login process.

    For an OKTA developer account, the user name is the email ID, so I am mapping Subject to email as below.

    Key Configuration:

    OIDC uses a public key cryptography mechanism. A private key is used by the OIDC provider to sign the JWT token, and a third party can verify the token using the public keys published at the OIDC provider's well-known URL. These keys form the basis of security between the parties, so the private keys must be protected from compromise. One of the best practices identified to protect the keys is known as key rollover or key refresh.

    From VMware Cloud Director 10.3.2 and above, if you want VMware Cloud Director to automatically refresh the OIDC key configurations, turn on the Automatic Key Refresh toggle.

    • Key Refresh Endpoint should get populated automatically as we choose auto discovery.
    • Select a Key Refresh Strategy.
      • Add – (Preferred option) Add the incoming set of keys to the existing set of keys. All keys in the merged set are valid and usable.
      • Replace – Replace the existing set of keys with the incoming set of keys.
      • Expire After – You can configure an overlap period between the existing and incoming sets of keys. You can configure the overlapping time using the Expire Key After Period, which you can set in hourly increments from 1 hour up to 1 day.
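    The difference between the Add and Replace strategies can be pictured with plain sets of key IDs (illustrative values, not real JWKS data):

```shell
# Key IDs currently trusted vs. the set fetched at the next refresh.
EXISTING="kid-2021 kid-2022"
INCOMING="kid-2022 kid-2023"

# Add: merge both sets; every key in the union remains valid and usable.
ADD_RESULT=$(printf '%s\n' $EXISTING $INCOMING | sort -u | tr '\n' ' ')
echo "Add:     $ADD_RESULT"

# Replace: only the incoming set remains valid.
echo "Replace: $INCOMING"
```

    Expire After behaves like Add, except that the old keys are dropped once the configured overlap period elapses.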

    If you did not use Configuration Discovery earlier, upload the private key that the identity provider uses to sign its tokens, and click SAVE.

    Now, in Cloud Director under Users, click IMPORT USERS, choose "OIDC" as the Source, add the user that exists in OKTA, and assign a role to that user.

    Now log out of the VCD console and try to log in again; Cloud Director automatically redirects to OKTA, which asks for credentials to validate.

    Once the user is authenticated by Okta, they will be redirected back to VCD and granted access per rights associated with the role that was assigned when the user was provisioned.

    Verify that the Last Run and the Last Successful Run are identical. The runs start at the beginning of the hour. The Last Run is the time stamp of the last key refresh attempt. The Last Successful Run is the time stamp of the last successful key refresh. If the time stamps are different, the automatic key refresh is failing and you can diagnose the problem by reviewing the audit events. (This is only applicable if Automatic Key Refresh is enabled. Otherwise, these values are meaningless)

    Bring on your Own OIDC – Tenant Configuration

    For tenant configuration, I have created a video; please take a look here. Tenants can bring their own OIDC and self-service it in the Cloud Director tenant portal.

    This concludes the OIDC configuration with VMware Cloud Director. I would like to thank my colleague Ankit Shah for his guidance and review of this document.

  • Tanzu Service on VMware Cloud on AWS – Installing Tanzu Application Platform

    VMware Tanzu Application Platform is a modular, application-aware platform that provides a rich set of developer tools and a paved path to production to build and deploy software quickly and securely on any compliant public cloud or on-premises Kubernetes cluster.

    Tanzu Application Platform delivers a superior developer experience for enterprises building and deploying cloud-native applications on Kubernetes. It enables application teams to get to production faster by automating source-to-production pipelines. It clearly defines the roles of developers and operators so they can work collaboratively and integrate their efforts.

    Operations teams can create application scaffolding templates with built-in security and compliance guardrails, making those considerations mostly invisible to developers. Starting with the templates, developers turn source code into a container and get a URL to test their app in minutes.

    Pre-requisite

    1. You should have created an account on Tanzu Network to download Tanzu Application Platform packages.
    2. Servers should have network access to https://registry.tanzu.vmware.com
    3. A container image registry reachable from the Kubernetes cluster; in my case I installed Harbor with a Let's Encrypt certificate.
    4. Registry credentials with read and write access made available to Tanzu Application Platform to store images.
    5. Git repository for the Tanzu Application Platform GUI’s software catalogs, along with a token allowing read access.

    Kubernetes cluster requirements

    Installation requires a Kubernetes cluster v1.20, v1.21, or v1.22 on Tanzu Kubernetes Grid Service on VMware Cloud on AWS, and pod security policies must be configured so that Tanzu Application Platform controller pods can run as root. To set the pod security policies, run:

    #kubectl create clusterrolebinding default-tkg-admin-privileged-binding --clusterrole=psp:vmware-system-privileged --group=system:authenticated

    Install Cluster Essentials for VMware Tanzu

    The Cluster Essentials for VMware Tanzu package simplifies the process of installing the open-source Carvel tools on your cluster. It includes a script that uses the Carvel CLI tools to download and install the server-side components kapp-controller and secretgen-controller on the targeted cluster. Currently, only macOS and Linux are supported for Cluster Essentials for VMware Tanzu.

    • Sign in to Tanzu Network.
    • Navigate to Cluster Essentials for VMware Tanzu on VMware Tanzu Network.
    • On Linux, download tanzu-cluster-essentials-linux-amd64-1.0.0.tgz.
    • Unpack the TAR file into the tanzu-cluster-essentials directory by running:
    #mkdir $HOME/tanzu-cluster-essentials
    #tar -xvf tanzu-cluster-essentials-linux-amd64-1.0.0.tgz -C $HOME/tanzu-cluster-essentials
    
    • Configure and run install.sh using the commands below:
    #export INSTALL_BUNDLE=registry.tanzu.vmware.com/tanzu-cluster-essentials/cluster-essentials-bundle@sha256:82dfaf70656b54dcba0d4def85ccae1578ff27054e7533d08320244af7fb0343
    #export INSTALL_REGISTRY_HOSTNAME=registry.tanzu.vmware.com
    #export INSTALL_REGISTRY_USERNAME=TANZU-NET-USER Name
    #export INSTALL_REGISTRY_PASSWORD=TANZU-NET-USER PASSWORD
    #cd $HOME/tanzu-cluster-essentials
    #./install.sh

    Now install the kapp and imgpkg CLIs onto your $PATH using the commands below:

    sudo cp $HOME/tanzu-cluster-essentials/kapp /usr/local/bin/kapp
    sudo cp $HOME/tanzu-cluster-essentials/imgpkg /usr/local/bin/imgpkg

    For Linux Client VM: Install the Tanzu CLI and Plugins

    To install the Tanzu command line interface (CLI) on a Linux operating system, create a directory named tanzu, download tanzu-framework-bundle-linux from Tanzu Network, unpack the TAR file into the tanzu directory, and install using the commands below:

    #mkdir $HOME/tanzu 
    #tar -xvf tanzu-framework-linux-amd64.tar -C $HOME/tanzu
    #export TANZU_CLI_NO_INIT=true
    #cd $HOME/tanzu 
    #sudo install cli/core/v0.11.1/tanzu-core-linux_amd64 /usr/local/bin/tanzu
    #tanzu version
    #cd $HOME/tanzu
    #tanzu plugin install --local cli all
    #tanzu plugin list
    

    Ensure that you have the accelerator, apps, package, secret, and services plug-ins installed. You need these plug-ins to install and interact with the Tanzu Application Platform.

    Installing the Tanzu Application Platform Package and Profiles

    VMware recommends installing the Tanzu Application Platform packages by relocating the images from the VMware Tanzu Network registry to your own registry, which eases the deployment process. Let's do that by logging in to the Tanzu Network registry, setting some environment variables, and relocating the images:

    #docker login registry.tanzu.vmware.com
    #export INSTALL_REGISTRY_USERNAME=MY-REGISTRY-USER
    #export INSTALL_REGISTRY_PASSWORD=MY-REGISTRY-PASSWORD
    #export INSTALL_REGISTRY_HOSTNAME=MY-REGISTRY
    #export TAP_VERSION=VERSION-NUMBER
    #imgpkg copy -b registry.tanzu.vmware.com/tanzu-application-platform/tap-packages:1.0.2 --to-repo ${INSTALL_REGISTRY_HOSTNAME}/TARGET-REPOSITORY/tap-packages

    This completes the download and upload of images to the local registry.

    Create a registry secret by running the command below:

    #tanzu secret registry add tap-registry \
      --username ${INSTALL_REGISTRY_USERNAME} --password ${INSTALL_REGISTRY_PASSWORD} \
      --server ${INSTALL_REGISTRY_HOSTNAME} \
      --export-to-all-namespaces --yes --namespace tap-install

    Add the Tanzu Application Platform package repository to the cluster by running:

    #tanzu package repository add tanzu-tap-repository \
      --url ${INSTALL_REGISTRY_HOSTNAME}/TARGET-REPOSITORY/tap-packages:$TAP_VERSION \
      --namespace tap-install

    Get the status of the Tanzu Application Platform package repository, and ensure the status updates to Reconcile succeeded by running:

    #tanzu package repository get tanzu-tap-repository --namespace tap-install

    Tanzu Application Platform profile

    The tap.tanzu.vmware.com package installs predefined sets of packages based on your profile settings. This is done by using the package manager you installed with Tanzu Cluster Essentials. Here is my sample file for the full profile:

    buildservice:
      descriptor_name: full
      enable_automatic_dependency_updates: true
      kp_default_repository: harbor.tkgsvmc.net/tbs/build-service
      kp_default_repository_password: <password>
      kp_default_repository_username: admin
      tanzunet_password: <password>
      tanzunet_username: tripathiavni@vmware.com
    ceip_policy_disclosed: true
    cnrs:
      domain_name: tap01.tkgsvmc.net
    grype:
      namespace: default
      targetImagePullSecret: tap-registry
    learningcenter:
      ingressDomain: learningcenter.tkgsvmc.net
    metadata_store:
      app_service_type: LoadBalancer
    ootb_supply_chain_basic:
      gitops:
        ssh_secret: ""
      registry:
        repository: tap
        server: harbor.tkgsvmc.net/tap
    profile: full
    supply_chain: basic
    tap_gui:
      app_config:
        app:
          baseUrl: http://tap-gui.tap01.tkgsvmc.net
        backend:
          baseUrl: http://tap-gui.tap01.tkgsvmc.net
          cors:
            origin: http://tap-gui.tap01.tkgsvmc.net
        catalog:
          locations:
            - target: https://github.com/avnish80/tap/blob/main/catalog-info.yaml
              type: url
      ingressDomain: tap01.tkgsvmc.net
      ingressEnabled: "true"
      service_type: LoadBalancer

    Save this file with the values modified for your environment; for more details about these settings, check Here.

    Install Tanzu Application Platform

    Finally, let's install TAP. To install the Tanzu Application Platform package, run:

    #tanzu package install tap -p tap.tanzu.vmware.com -v $TAP_VERSION --values-file tap-values.yml -n tap-install

    To verify the installed packages, you can check in Tanzu Mission Control (TMC), or run the command below:

    #tanzu package installed get tap -n tap-install

    This completes the installation of Tanzu Application Platform. Developers can now develop and promote an application, create an application accelerator, add testing and security scanning to an application, and administer, set up, and manage supply chains.

  • Tanzu Service on VMware Cloud on AWS – Kubernetes Cluster Operations

    Tanzu Kubernetes Grid is a managed service offered by VMware Cloud on AWS. Activate Tanzu Kubernetes Grid in one or more SDDC clusters to configure Tanzu support in the SDDC vCenter Server. In my previous post (Getting Started with Tanzu Service on VMware Cloud on AWS), I walked you through how to enable the Tanzu service on VMware Cloud on AWS.

    In this post I will deploy a Tanzu Kubernetes cluster from the GUI (Tanzu Mission Control) as well as from the CLI, using the updated v1alpha2 API. Let's get started.

    Deploy Tanzu Kubernetes Cluster using Tanzu Mission Control

    In Tanzu Mission Control, validate that the VMC Supervisor Cluster is registered and healthy: click on Administration, go to "management clusters", and check the status.

    Now on Tanzu Mission Control, click on “Clusters” and then click on “CREATE CLUSTER”

    Select your VMC Tanzu Management Cluster and click on “CONTINUE TO CREATE CLUSTER”

    On the next screen, choose a "Provisioner" (the namespace name). You add a provisioner by creating a vSphere namespace in the Supervisor Cluster, which you can do in the VMC vCenter.

    Next, select the Kubernetes version (the latest supported version is preselected for you), the pod CIDR, and the service CIDR. You can also optionally select the default storage class for the cluster and the allowed storage classes. The list of storage classes you can choose from is taken from your vSphere namespace.

    Select the type of cluster you want to create. The primary difference between the two is that the highly available cluster is deployed with multiple control plane nodes.

    You can optionally select a different instance type for the cluster's control plane node and its storage class, and you can optionally add additional storage volumes for your control plane.

    To configure additional volumes, click Add Volume and then specify the name, mount path, and capacity for the volume. To add another, click Add Volume again.

    Next, you can define the default node pool and create additional node pools for your cluster: specify the number of worker nodes to provision, select the instance type for the worker nodes, and select the storage class.

    When you are ready to provision the new cluster, click Create Cluster and wait a few minutes.

    You can also view the vCenter activity for the Tanzu Kubernetes cluster creation.

    Once the cluster is fully created and the TMC agent has reported back, you should see the status below on the TMC console, showing that the cluster was created successfully.

    This completes the Tanzu Kubernetes cluster deployment using the GUI.

    Deploy Tanzu Kubernetes Grid Service using v1alpha2 API yaml

    The Tanzu Kubernetes Grid Service v1alpha2 API provides a robust set of enhancements for provisioning Tanzu Kubernetes clusters. Below is the YAML specification I am using to provision a Tanzu Kubernetes cluster with the v1alpha2 API.

    apiVersion: run.tanzu.vmware.com/v1alpha2
    kind: TanzuKubernetesCluster
    metadata:
      name: tkgsv2
      namespace: wwmca
    spec:
      topology:
        controlPlane:
          replicas: 1
          vmClass: guaranteed-medium
          storageClass: vmc-workload-storage-policy-cluster-1
          volumes:
            - name: etcd
              mountPath: /var/lib/etcd
              capacity:
                storage: 4Gi
          tkr:  
            reference:
              name: v1.21.2---vmware.1-tkg.1.ee25d55
        nodePools:
        - name: worker-nodepool-a1
          replicas: 2
          vmClass: best-effort-large
          storageClass: vmc-workload-storage-policy-cluster-1
          tkr:  
            reference:
              name: v1.21.2---vmware.1-tkg.1.ee25d55
      settings:
        storage:
          defaultClass: vmc-workload-storage-policy-cluster-1
        network:
          cni:
            name: antrea
          services:
            cidrBlocks: ["198.53.100.0/16"]
          pods:
            cidrBlocks: ["192.0.5.0/16"]
          serviceDomain: managedcluster.local
          trust:
            additionalTrustedCAs:
              - name: CompanyInternalCA-1
                data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tDQpNSUlG

    Two key parameters I am using for cluster provisioning:

    • tkr.reference.name is the TKR name to be used by the control plane nodes; the supported format is "v1.21.2---vmware.1-tkg.1.ee25d55"
    • trust configures additional certificates for the cluster; if omitted, no additional certificate is configured

    Save the specification above to a file and apply it with kubectl apply -f; you can then run the command below to check the status of cluster provisioning:

    #kubectl get tkc

    Scale a Tanzu Kubernetes cluster
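    With the v1alpha2 API, scaling a provisioned cluster is declarative: edit the TanzuKubernetesCluster manifest (for example with kubectl edit tkc/tkgsv2 -n wwmca) and change the replicas counts, then let the Supervisor Cluster reconcile. A minimal sketch of the changed fragment, assuming the cluster from the spec above:

```yaml
# Fragment of the TanzuKubernetesCluster spec above; only replicas changes.
spec:
  topology:
    nodePools:
    - name: worker-nodepool-a1
      replicas: 3   # scaled out from 2
```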

    Publish the service Internally/Externally

    Before we can make our service available over the Internet, it should be accessible from within the VMware Cloud on AWS instance. Platform operators can publish applications through a Kubernetes Service of type LoadBalancer. This ability is made possible through the NSX-T Container Plugin (NCP) functionality built into Tanzu Kubernetes Grid. Let's deploy a basic container and expose it as type "LoadBalancer":

    #kubectl run nginx1 --image=nginx
    #kubectl expose pod nginx1 --type=LoadBalancer --port=80

    Now you can access the application internally using the load balancer IP assigned to the service (shown in the EXTERNAL-IP column of kubectl get svc nginx1).

    Access application from Internet

    To make it publicly available, we must assign a public IP address and configure a destination NAT. Let's do that: request a public IP in the VMC console and create a NAT rule on the Internet tab to make the application reachable from the internet.

    Now access the application from the internet; you should be able to reach it using the assigned public IP.

    Exposing a Kubernetes service to the Internet takes a couple of more steps to complete than exposing it to your internal networks, but the VMware Cloud Console makes those steps simple enough. After exposing the Kubernetes service using an NSX-T Load Balancer, you can request a new Public IP Address and then configure a NAT rule to send that traffic to the virtual IP address of the load balancer.