Upgrade PostgreSQL 9.5 to 10 for vCloud Director

vCloud Director 9.7 has dropped support for PostgreSQL 9.5, so I had to upgrade my database to PostgreSQL 10 before updating vCloud Director to 9.7. I followed the steps below to upgrade the database; at a high level they are:

  • Back up the existing database and data directory.
  • Uninstall the old version of PostgreSQL.
  • Install PostgreSQL 10.
  • Restore the backup.

Procedure

  • Create a database backup:
    • su - postgres
    • pg_dumpall > /tmp/pg9dbbackup
    • exit
  • Check and stop the service:
    • #chkconfig
    • #service postgresql-9.5 stop
  • Move the current data directory to /tmp as data.old using the command below:
    • #mv /var/lib/pgsql/9.5/data/ /tmp/data.old
  • Uninstall PostgreSQL 9.5:
    • yum remove postgresql*
  • Install PostgreSQL 10.
  • Initialise the database:
    • service postgresql-10 initdb
    • If the above does not work (as suggested by my friend Miguel), use: /usr/pgsql-10/bin/postgresql-10-setup initdb
  • Copy pg_hba.conf and postgresql.conf from the backed-up data directory to the new one (this saves time; alternatively, edit the new files with the required settings), then start the service:
    • cp /tmp/data.old/pg_hba.conf /var/lib/pgsql/10/data/
    • cp /tmp/data.old/postgresql.conf /var/lib/pgsql/10/data/
    • service postgresql-10 start
  • Restore the backup (a consolidated, hedged sketch of the full command sequence follows this list):
    • su - postgres
    • psql -d postgres -f /tmp/pg9dbbackup
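For reference, here is the whole upgrade as one hedged shell sketch, run as root on the database server. The PGDG repository RPM and the postgresql10-server package name are assumptions for a RHEL/CentOS host; adjust package names and paths to your distribution.

    # back up the old database and stop the 9.5 service
    su - postgres -c "pg_dumpall > /tmp/pg9dbbackup"
    service postgresql-9.5 stop
    mv /var/lib/pgsql/9.5/data/ /tmp/data.old

    # remove PostgreSQL 9.5 and install PostgreSQL 10
    yum remove postgresql*
    yum install <pgdg-repo-rpm-for-your-distro>   # assumption: PostgreSQL 10 packages come from the PGDG yum repository
    yum install postgresql10-server

    # initialise the new cluster, reuse the old config files and start the service
    service postgresql-10 initdb || /usr/pgsql-10/bin/postgresql-10-setup initdb
    cp /tmp/data.old/pg_hba.conf /var/lib/pgsql/10/data/
    cp /tmp/data.old/postgresql.conf /var/lib/pgsql/10/data/
    service postgresql-10 start

    # restore the dump
    su - postgres -c "psql -d postgres -f /tmp/pg9dbbackup"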

Finally, run the cell-management-tool reconfigure-database command on the vCloud Director cell so it points at the new database (adjust the connection details to your environment); a hedged example follows.
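A minimal sketch of that step, assuming a default cell installation path and a database named vcloud; the host, user, and password are placeholders, and flag names can vary slightly between vCD versions:

    /opt/vmware/vcloud-director/bin/cell-management-tool reconfigure-database \
      -dbtype postgres -dbhost vcd-db.corp.local -dbport 5432 \
      -dbname vcloud -dbuser vcloud -dbpassword 'VMware1!'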


This will complete the database upgrade and database migration procedure.

 

 

 


Deploy VMware PKS – Part2

In this part I will begin the PKS installation by deploying Pivotal Ops Manager, which provides a management interface (UI/API) for platform operators to manage the complete lifecycle of both BOSH and PKS, from installation through patching and upgrades.

The other posts in this series are here:

Getting Started with VMware PKS & NSX-T

Deploy VMware PKS – Part1

In addition, you can deploy new application services using Ops Manager tiles, such as an enterprise-class container registry like VMware Harbor, which can then be configured to work with PKS.

Installing Ops Manager


  • Once the OVA is downloaded, log in to vCenter using the vSphere Web Client or HTML5 client to deploy the Ops Manager OVA.
  • Choose your management cluster, an appropriate network, and the other OVA deployment options (I am not covering the generic OVA deployment procedure here). At the Customize template step, enter the following details:
    • Admin Password: A default password for the user “ubuntu”.
      • If you do not enter a password, Ops Manager will not boot up.
    • Custom hostname: The hostname for the Ops Manager VM, in my example opsmgr.corp.local.
    • DNS: One or more DNS servers for the Ops Manager VM.
    • Default Gateway: The default gateway for Ops Manager.
    • IP Address: The IP address of the Ops Manager network interface.
    • NTP Server: The IP address of one or more NTP servers for Ops Manager.
    • Netmask: The network mask for Ops Manager.
  • Create a DNS entry for the IP address that you used for Ops Manager; we will use it in the next steps. Browse to this DNS name/IP address, which takes you to the authentication system for the initial authentication setup. For this lab I will use “Internal Authentication”, so click on “Internal Authentication”.
  • Next, you will be prompted to create a new admin user, which we will use to manage BOSH. Once you have successfully created the user, log in with the new account.
  • Once you are logged in to Ops Manager, you can see that the BOSH tile is already present but shows as unconfigured (orange denotes unconfigured), which means BOSH has not been deployed yet. Click on the tile to begin the configuration and deploy BOSH.

Before starting the BOSH tile configuration, we need to prepare NSX Manager; the procedure is listed below.

Generating and Registering the NSX Manager Certificate for PKS

The NSX Manager CA certificate is used to authenticate PKS with NSX Manager. You create a self-signed certificate and register it with the NSX Manager. By default, the NSX Manager includes a self-signed API certificate with its hostname as the subject and issuer. PKS Ops Manager requires strict certificate validation and expects the subject and issuer of the self-signed certificate to be either the IP address or the fully qualified domain name (FQDN) of the NSX Manager. For that reason, we need to regenerate the self-signed certificate with the FQDN of the NSX Manager in the subject and issuer fields, and then register the certificate with the NSX Manager using the NSX API.

  • Create a file named “nsx-cert.cnf” for the certificate request parameters on a Linux VM where the openssl tool is installed.
  • Write the required request parameters into the file created in the step above.
  • Export the NSX_MANAGER_IP_ADDRESS and NSX_MANAGER_COMMONNAME environment variables, using the IP address of your NSX Manager and the FQDN of the NSX Manager host.
  • Generate the certificate with the openssl tool.
  • Verify the certificate:
    • ~$ openssl x509 -in nsx.crt -text -noout and ensure the SAN contains the DNS name and IP address.
  • Import the certificate into NSX Manager: go to System -> Trust -> Certificates and click Import -> Import Certificate.
  • Ensure that the certificate is listed in your NSX Manager.
  • Next, register the certificate with NSX Manager; first note the ID of the certificate from the GUI.
  • From your API client, run the registration call, replacing “CERTIFICATE-ID” with your certificate ID. A hedged sketch of the whole sequence follows below.

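Since the screenshots are not reproduced here, below is a hedged sketch of the whole sequence, modelled on the public PKS/NSX-T documentation. The file contents, IP address, hostname, and admin password are placeholders, and the exact registration API call may vary with your NSX-T version.

nsx-cert.cnf (placeholder values):

    [ req ]
    default_bits = 2048
    distinguished_name = req_distinguished_name
    req_extensions = req_ext
    prompt = no
    [ req_distinguished_name ]
    countryName = US
    stateOrProvinceName = California
    localityName = CA
    organizationName = NSX
    commonName = nsxmgr.corp.local
    [ req_ext ]
    subjectAltName = @alt_names
    [ alt_names ]
    DNS.1 = nsxmgr.corp.local

Then export the variables, generate and verify the certificate, and register it:

    export NSX_MANAGER_IP_ADDRESS=192.168.110.42
    export NSX_MANAGER_COMMONNAME=nsxmgr.corp.local

    # generate a self-signed certificate with DNS and IP SAN entries
    openssl req -newkey rsa:2048 -x509 -nodes -keyout nsx.key -new -out nsx.crt \
      -subj /CN=$NSX_MANAGER_COMMONNAME -reqexts SAN -extensions SAN \
      -config <(cat nsx-cert.cnf <(printf "[SAN]\nsubjectAltName=DNS:$NSX_MANAGER_COMMONNAME,IP:$NSX_MANAGER_IP_ADDRESS")) \
      -sha256 -days 365

    # verify the SAN entries
    openssl x509 -in nsx.crt -text -noout

    # after importing nsx.crt in the NSX Manager UI and noting its ID:
    curl -k -u 'admin:<nsx-admin-password>' -X POST \
      "https://$NSX_MANAGER_IP_ADDRESS/api/v1/node/services/http?action=apply_certificate&certificate_id=CERTIFICATE-ID"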
Now let’s configure the BOSH tile, which will deploy BOSH based on our input parameters.

Configure BOSH Tile to Deploy BOSH Director

Click on the tile. It will open the tile’s settings tab at the vCenter Config parameters page.

  • vCenter Config
    • Name: A unique, meaningful name.
    • vCenter Host: The hostname of the vCenter.
    • vCenter Username: A username for the above vCenter with privileges to create and delete virtual machines (VMs) and folders.
    • vCenter Password: The password for the above vCenter user.
    • Datacenter Name: The exact name of the datacenter object in vCenter.
    • Virtual Disk Type: Select “Thin” or “Thick”.
    • Ephemeral Datastore Names: The names of the datastores that store ephemeral VM disks deployed by Ops Manager; you can specify multiple datastores as a comma-separated list.
    • Persistent Datastore Names (comma delimited): The names of the datastores that store persistent VM disks deployed by Ops Manager.
    • To deploy BOSH as well as the PKS control plane VMs, Ops Manager uploads a stemcell (a VM template) and clones from that image for both the PKS management VMs and the base K8s VMs.
  • NSX-T Config 
    • Choose NSX Networking and Select NSX-T 
    • NSX Address: IP/DNS Name for NSX-T Manager.
    • NSX Username and NSX Password: NSX-T credential
    • NSX CA Cert: Open the NSX CA cert that you generated in the section above and copy/paste its contents into this field.
  •  Other Config
    • VM Folder: The vSphere datacenter folder where Ops Manager places VMs.
    • Template Folder: The vSphere folder where Ops Manager places stemcells(templates).
    • Disk path Folder: The vSphere datastore folder where Ops Manager creates attached disk images. You must not nest this folder.
    • Then click on “Save”.
  • Director Config
    • For the Director config, I filled in a few details:
      • NTP Server Details
      • Enable VM Resurrector Plugin
      • Enable Post Deploy Scripts
      • Recreate all VMs (optional)
  • Availability Zone    
    • Availability zones are defined at the vSphere cluster level. These AZs are then used by the BOSH Director to determine where to deploy the PKS management VMs as well as the Kubernetes VMs. Multiple availability zones allow you to provide high availability across datacenters. For this demonstration I have created two AZs, one for management and one for compute.
  • Create Network
    • Since I am using a vDS-backed portgroup for my PKS management components, we need to specify those details in this section; make sure you select the management AZ, which is the vSphere cluster where the BOSH and PKS control plane VMs will be deployed.


  • Assign AZs and Networks 
    • In this section, define the AZ and network placement settings for the PKS management VM. Singleton Availability Zone – the Ops Manager Director installs into this availability zone.


  • Security & Syslog
    • I am leaving this section at its defaults; if your deployment requires changes, please refer to the documentation.
  • Resource Config
    • As per the sizing in Part 1, the BOSH Director VM by default allocates 2 vCPUs, 8 GB memory, and a 64 GB disk, plus a 50 GB persistent disk; each of the four compilation VMs uses 4 vCPUs, 4 GB memory, and a 16 GB disk. For my lab deployment I changed these values to suit my lab resources. The BOSH Director needs a minimum of 8 GB memory to run, so choose options accordingly.


  • Review Pending Changes and Apply Changes 
    • With all the input completed, return to the Installation Dashboard view by clicking Installation Dashboard at the top of the window. The BOSH Director tile will now have a green bar indicating that all the required parameters have been entered. Next, click REVIEW PENDING CHANGES and then APPLY CHANGES.


  • Monitor Installation and Finish
    • If all the inputs are correct, you will see that the installation completes successfully.


After you log in to your vCenter, you will see a new powered-on VM in your inventory whose name starts with vm- followed by a UUID; this is the BOSH VM. Ops Manager uses vSphere custom attributes to add metadata fields that identify the various VMs it deploys; you can check what type of VM this is by looking at the deployment, instance_group, or job attribute. In this case, we can see it is marked as p-bosh.


And this completes the Ops Manager and BOSH deployment; in the next post we will install the PKS tile.

 

 

Deploy VMware PKS – Part1

In continuation of my previous blog post here, where I explained the PKS components and sizing details, in this post I will cover the PKS component deployment.

Previous Post in this Series:

Getting Started with VMware PKS & NSX-T

Pre-requisite:

  • Use a new or existing server with the DNS role installed and configured; we will use it in this deployment.
  • Install vCenter and ESXi. For this lab I have created two vSphere clusters:
    • Management Cluster + Edge Cluster – Three Nodes
    • Compute Cluster – Two Nodes
  • Create an Ubuntu server on which you will install client utilities such as:
    • PKS CLI
      • The PKS CLI is used to create, manage, and delete Kubernetes clusters.
    • KUBECTL
      • To deploy workloads/applications to a Kubernetes cluster created using the PKS CLI, use the Kubernetes CLI called “kubectl“.
    • UAAC
      • To manage users in Pivotal Container Service (PKS) with User Account and Authentication (UAA). Create and manage users in UAA with the UAA Command Line Interface (UAAC).

    • BOSH
      • The BOSH CLI is used to manage the PKS management component deployments and provides information about the VMs through its Cloud Provider Interface (CPI), which is vSphere in my lab but could also be Azure, AWS, or GCP.
    • OM
      • The Ops Manager (om) command-line interface.
  • Prepare NSX-T

    For this deployment, make sure NSX-T is deployed and configured; the high-level steps are as follows:

    • Install NSX Manager
    • Deploy NSX Controllers
    • Register the controllers with the manager, and join the other controllers with the master controller.
    • Deploy NSX Edge Nodes

    • Register NSX Edge Nodes with NSX Manager

    • Enable Repository Service on NSX Manager

    • Create TEP IP Pool

    • Create Overlay Transport Zone

    • Create VLAN Transport Zone

    • Create Edge Transport Nodes

    • Create Edge Cluster

    • Create T0 Logical Router and configure BGP routing with physical device

    • Configure Edge Nodes for HA

    • Prepare ESXi Servers for the PKS Compute Cluster

My PKS deployment topology looks like this:


  • PKS Deployment Topology – PKS management stack running out of NSX-T
    • PKS VMs (Ops Manager, BOSH, PKS Control Plane, Harbor) are deployed to a VDS backed portgroup
    • Connectivity between PKS VMs, K8S Cluster Management and T0 Router is through a physical router
    • NAT is only configured on T0 to provide POD networks access to associated K8S Cluster Namespace
  • Create an IP Pool
    • Create a new IP pool that will be used to allocate virtual IPs for the exposed K8s services; the pool also provides IP addresses for Kubernetes API access. Go to Inventory -> Groups -> IP Pools and enter the following configuration:
      • Name: PKS-FLOATING-POOL
      • IP Range: 172.26.0.100 – 172.26.255.254
      • CIDR: 172.26.0.0/16
  • Create POD-IP-BLOCK
    • We need to create a new POD IP block that PKS will use on demand to carve out smaller /24 networks and assign them to each K8s namespace. This IP block should be sized sufficiently to ensure that you do not run out of addresses. To create the POD-IP-BLOCK, go to NETWORKING -> IPAM and enter the details.
  • Create NODEs-IP-BLOCK
    • We need to create a new NODEs IP block that PKS will use to assign IP addresses to the Kubernetes master and worker nodes. Each Kubernetes cluster owns a /24 subnet from this block, so to deploy multiple Kubernetes clusters, plan for something larger than a /24 (a /16 is recommended).

Prepare Client VM

  • Create and install a small Ubuntu VM with the default configuration. You can use the latest server version; ensure the VM has internet connectivity, either directly or via a proxy.
  • Once the Ubuntu VM is ready, download the PKS CLI and kubectl from https://network.pivotal.io/products/pivotal-container-service


and copy both the PKS (pks-linux-amd64-1.3.0-build.126 or later) and kubectl (kubectl-linux-amd64-1.12.4 or later) CLIs to the VM.

  • Now SSH to the Ubuntu VM and run the following commands to make the binaries executable and rename/relocate them to the /usr/local/bin directory:
    • chmod +x pks-linux-amd64-1.3.0-build.126
    • chmod +x kubectl-linux-amd64-1.12.4
    • mv pks-linux-amd64-1.3.0-build.126 /usr/local/bin/pks
    • mv kubectl-linux-amd64-1.12.4 /usr/local/bin/kubectl
    • Check version using – pks -v and kubectl version
  • Next, install the Cloud Foundry UAAC by running:
    • apt -y install ruby ruby-dev gcc build-essential g++
    • gem install cf-uaac
    • Check version using – uaac -v
  • Next, install the BOSH CLI and the om CLI; a hedged sketch follows below.
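A hedged sketch of that step; the versions and download URLs below are assumptions, so check the bosh-cli and om release pages on GitHub for the current builds:

    # BOSH CLI (version/URL are assumptions)
    wget https://github.com/cloudfoundry/bosh-cli/releases/download/v5.4.0/bosh-cli-5.4.0-linux-amd64
    chmod +x bosh-cli-5.4.0-linux-amd64
    mv bosh-cli-5.4.0-linux-amd64 /usr/local/bin/bosh
    bosh --version

    # om CLI (version/URL are assumptions)
    wget https://github.com/pivotal-cf/om/releases/download/0.51.0/om-linux
    chmod +x om-linux
    mv om-linux /usr/local/bin/om
    om --version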

This completes this part; in the next part we will start deploying the PKS management VMs and their configuration.

 

VMware Container Service Extension Upgrade

With the release of the new Container Service Extension (CSE) version 1.2.7, which addresses a Docker vulnerability (CVE-2019-5736) in both the Ubuntu and Photon OS templates, it is very important to update CSE as soon as possible. Here is the procedure to help you upgrade CSE easily.

Pre-requisite:

  • Check the release notes Here for version compatibility.

Upgrade procedure for Cloud Admins:

  • Update CSE to 1.2.7 ( follow procedure below)
  • Update the templates (follow procedure below)

Upgrading CSE Server Software

  •  Stop CSE Server services gracefully.
    • #vcd cse system stop -y
  • Reinstall container-service-extension using Python Package Index:
    • #pip3 install --user --upgrade container-service-extension
  • Review the configuration file for any new options introduced or deprecated in the new version. The cse sample command can be used to generate a new sample config file as well.
    • Follow the steps listed here to edit the environment variables for CSE to use.
  • If the previously generated templates are no longer supported by the new version, delete the templates and regenerate new ones using the command below:
    • cse install -c mysample.yaml --update
  • If running CSE as a service, start the new version of the service with
    • $systemctl start cse

Upgrade procedure for Tenant Users:

  • Delete clusters that were created with the older templates and recreate them with the new templates.
  • Alternatively, tenant users can update the Docker version manually on their existing clusters.

This completes the upgrade procedure; go ahead and let your customers consume Kubernetes as a Service from your platform.

VMware CSE Upgrade Error – Missing keys in config file ‘service’ section: {‘enforce_authorization’}

While trying to upgrade CSE to the latest version, 1.2.7, the upgrade process fails with an error like this: Missing keys in config file ‘service’ section: {‘enforce_authorization’}


With this new release, many new options have been added to the configuration file for PKS integration. To resolve this issue, there are two options:

  • Create a new sample config.yaml file using command:
    • cse sample > myconfig.yaml  – and reconfigure it.
  • If you don’t need PKS integration right now, edit the existing config.yaml file and add “enforce_authorization: false” to the service section, as sketched below.
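For example, after the change the service section of config.yaml would look roughly like this (a minimal sketch; the listeners value shown is just the sample default and may differ in your file):

    service:
      listeners: 5
      enforce_authorization: false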

Once you have made the changes, re-run the command and it should now complete the process successfully.


This new option has not yet been documented properly on the CSE GitHub page 🙂

 

VMware Container Service Extension Installation – Part-1

In continuation of my last post on Kubernetes as a Service on vCloud Director, here is the next post on the installation of the Container Service Extension on vCloud Director.

This post applies to CSE version 1.2.5

CSE Installation

This installation procedure applies to the client VM as well as the CSE server VM. For this installation I will leverage a Photon OS 2.0 VM based on the official OVA, which is available here. Deploy the OVA following the standard OVA deployment procedure. Once deployed, make sure you configure a static IP and correct networking for your environment, and ensure that the machine can reach the internet to download the necessary binaries.

Configure Static IP on Photon OS

Edit the file 99-dhcp-en.network inside the directory /etc/systemd/network and change it along the lines of the sketch below.

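Since the screenshot is not reproduced here, a minimal sketch of a static configuration for that file; the interface name, addresses, and DNS server are placeholders from my lab, so adjust them to your environment:

    [Match]
    Name=eth0

    [Network]
    Address=192.168.110.60/24
    Gateway=192.168.110.1
    DNS=192.168.110.10

After saving the file, apply the change with: systemctl restart systemd-networkd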

By default, ping is blocked on Photon OS, so open the firewall using the commands sketched below:

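A hedged sketch of that change; Photon OS uses iptables, and the rule below only lasts until reboot, so persist it in the iptables script under /etc/systemd/scripts/ if you need it permanently:

    # allow inbound ICMP (ping) for the running system
    iptables -A INPUT -p ICMP -j ACCEPT

    # verify the rule is in place
    iptables -L INPUT -n | grep -i icmp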

Now install the Python-related packages using the commands below:

root@photon-machine [ ~ ]# tdnf install -y build-essential python3-setuptools python3-tools python3-pip python3-devel

root@photon-machine [ ~ ]# pip3 install --upgrade pip

Install CSE Software:

Now install CSE and verify the installation:

root@photon-machine [ ~ ]# pip3 install container-service-extension


This completes the installation of CSE; now we need to enable the CSE client on this VM.

Enable CSE Client:

Edit the ~/.vcd-cli/profiles.yaml file to include the extensions section; a hedged sketch follows below.

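A minimal sketch of the section to add, based on the CSE documentation; keep the rest of profiles.yaml exactly as vcd-cli generated it:

    extensions:
    - container_service_extension.client.cse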

vCD Prerequisites:

There are many important requirements that must be fulfilled to install CSE successfully on vCD.

  • Catalog Organization creation:
  • Create a VDC within the org that has an external org network in which vApps may be instantiated, and sufficient storage to create vApps and publish them as templates. The external network connection is required to enable the template VMs to download packages during configuration. The process is as follows:
    • The CSE server uploads the base OS image to vCloud Director into a CSE catalog.
    • The CSE server deploys the template as a VM on an org VDC network that has internet access, then downloads and installs the required Kubernetes and Docker binaries.
    • CSE then validates the VM, captures it as a vApp template, and adds it back to the CSE catalog as a valid item for deploying container hosts.
  • Create a user in the org with privileges necessary to perform operations like configuring AMQP, creating public catalog entries, and managing vApps.
  • A good network connection from the host running CSE installation to vCD as well as the Internet. This avoids intermittent failures in OVA upload/download operations.

CSE Server Config File:

The CSE server is controlled by a YAML configuration file that must be filled out prior to installation. Once the vCD prerequisites are ready, you can generate a sample file using the command below:

#cse sample > config.yaml  ( cse sample generates sample config yaml)

Run the above command on the VM we prepared for the CSE server. The file has five sections, which I will cover one by one.

AMQP Section:

  • During CSE server installation, CSE will configure AMQP to ensure communication between vCD and the running CSE server. If vCD has already been configured with AMQP, skip this section when running the install command; if it has not, enter the information in this section and CSE will configure AMQP in vCD for you automatically.

vCD Section:

  • This section is self-explanatory; specify the vCD-related details (ensure the API version matches your vCD version).


vCS Section:

  • In this section, provide vCenter information such as the VC name and credentials.


 Service Section:

  • The service section specifies the number of threads to run in the CSE server process.


Broker Section:

  • The broker section contains properties that define the resources used by the CSE server, including the org and VDC as well as the template definitions. More details can be found here.

  • A sample config.yaml file can be downloaded from config; a hedged skeleton of the overall layout follows below.
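Since the per-section screenshots are not reproduced here, below is a hedged skeleton of how the five sections of a CSE 1.2.x config.yaml are laid out. All values are placeholders and some keys differ between CSE versions, so always start from the output of cse sample rather than from this sketch:

    amqp:
      host: amqp.corp.local
      port: 5672
      username: guest
      password: guest
      exchange: cse-ext
      routing_key: cse
      ssl: false

    vcd:
      host: vcd.corp.local
      port: 443
      api_version: '29.0'
      username: administrator
      password: '********'
      verify: false

    vcs:
    - name: vc01
      username: administrator@vsphere.local
      password: '********'
      verify: false

    service:
      listeners: 5

    broker:
      type: default
      org: cse-org
      vdc: cse-vdc
      catalog: cse-catalog
      network: cse-extnet
      ip_allocation_mode: pool
      storage_profile: '*'
      default_template: photon-v2
      cleanup: true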

CSE SERVER INSTALLATION:

  • Once you are ready with the file, run the cse install command to start the installation. (As said earlier, the CSE server must be installed by the vCloud Director system/cloud administrator on a dedicated VM; the CSE appliance must be able to reach the vCenter, vCD, and AMQP servers. I am installing on the VM prepared in the first section.)
  • #cse install -c config.yaml --ssh-key=$HOME/.ssh/id_rsa.pub --ext config --amqp skip
  • I am skipping the AMQP configuration as AMQP is already configured in my vCD.


  • It failed due to an issue, so I had to rerun the command after fixing it; the install command can safely be run multiple times.


  • Once the installation is complete, check the installation status using:
  • #cse check --config config.yaml --check-install


  • Now, to validate that CSE has been registered in vCD, use the “vcd-cli” command line to check that the extension is registered (a hedged sketch follows below):

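A hedged sketch of that check; the hostname and credentials are placeholders, and the extension is registered under the name cse:

    # log in to vCD as a system administrator
    vcd login vcd.corp.local system administrator --password '********' -w -i

    # list API extensions and show the CSE registration
    vcd system extension list
    vcd system extension info cse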

Running CSE Server as a Service:

  • Create a file named “cse.sh” inside the directory /home/vmware; its content is sketched below.
  • Create a file named cse.service inside the directory /etc/systemd/system; it is also sketched below.
  • Once installed, you can start the CSE service daemon using #systemctl start cse. To enable, disable, and stop the CSE service, use the CSE client.
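Since the screenshots are not reproduced here, below is a hedged sketch of both files, assuming config.yaml lives in /home/vmware; the paths and user are placeholders, so adjust them to your setup.

    # /home/vmware/cse.sh
    #!/usr/bin/env bash
    cse run -c /home/vmware/config.yaml

    # /etc/systemd/system/cse.service
    [Unit]
    Description=Container Service Extension for VMware vCloud Director
    Wants=network-online.target
    After=network-online.target

    [Service]
    ExecStart=/home/vmware/cse.sh
    Type=simple
    User=root
    WorkingDirectory=/home/vmware
    Restart=always

    [Install]
    WantedBy=default.target

Make cse.sh executable with chmod +x /home/vmware/cse.sh, then run systemctl daemon-reload and systemctl enable cse before starting the service.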

Setting the API Extension Timeout

  • The API extension timeout is the number of seconds that vCD waits for a response from the CSE server extension. The default value is 10 seconds, which may be too short for some environments. To change the timeout, follow these steps:

    • On the vCloud Director cell, go to /opt/vmware/vcloud-director/bin and run the cell-management-tool command with -l to list the current value and -v to set a new one; a hedged example follows below.
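A hedged example based on the CSE documentation, listing the current value and then raising the timeout to 60 seconds (the property name and flags may vary by vCD version):

    cd /opt/vmware/vcloud-director/bin
    ./cell-management-tool manage-config -n extensibility.timeout -l
    ./cell-management-tool manage-config -n extensibility.timeout -v 60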

Enable CSE

  • Log in to vCD and enable CSE using the CSE client commands sketched below.
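A hedged sketch of the commands; the hostname and credentials are placeholders:

    # log in to vCD as a system administrator, then enable CSE
    vcd login vcd.corp.local system administrator --password '********' -w -i
    vcd cse system enable

    # confirm the CSE server status
    vcd cse system info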

This completes the installation of the Container Service Extension and allows providers to offer Kubernetes as a Service to their customers. Feel free to share your experience with this installation.

What is VMware Cloud Provider Pod

Many partners are looking for a solution that can automate the entire deployment of a vCD-based cloud once the racking, stacking, and cabling of their infrastructure is done; that is where VMware Cloud Provider Pod helps. Basically, Cloud Provider Pod automates the deployment of VMware-based clouds. A Cloud Provider Pod-deployed stack adheres to VMware Validated Design principles and is thoroughly tested for interoperability and performance. It is also tested for cloud scale and is built to handle rigorous cloud provider workloads. It deploys technologies with core provider capabilities such as data center extension, cloud migration, multi-tenancy, and chargeback, and helps achieve the fastest path to delivering VMware-based cloud services. Cloud Provider Pod helps cloud providers with time to market and with improving service delivery.

Features:


Cloud Provider Pod 1.1: Supported Interoperable Versions for Deployment

vSphere 6.7u1
vSAN 6.7u1
NSX 6.4.4
vCloud Director Extender 1.1.0.2
vRealize Orchestrator 7.5
vRealize Operations 7.0, including Multi-Tenant App 2.0
vRops – Cloud Pod Management Pack
vRealize Log Insight 4.7
vRealize Network Insight 4.0
Usage Meter 3.6.1

Pod Designer Walkthrough

The Cloud Provider Pod Designer offers providers the choice to start with a VMware Validated Design (CONFIGURE YOUR CLOUD) or with the Advanced Design, which is a custom design based on your environment-specific requirements rather than the VMware Validated Design.


The main difference between the VMware Validated Design and the Advanced Configuration modes is that in Advanced mode you can choose NFS, iSCSI, or Fibre Channel as your storage option; the setup of a BGP AS and other options is also not required, but can be done. The VVD designer starts by asking for basic details about the cloud environment you want to build. Click on “Configure Your Cloud”, which takes you to the screen where you fill in the “General Parameters”.


The next screen lets you choose the optional packages you want to include in or exclude from your deployment.


Next is the resource cluster selection, where you choose how many resource clusters your deployment will have and how many hosts each resource cluster will contain.


On the next screen, enter your environment details such as DNS, NTP, etc.


Enter your management cluster’s networking and public-facing IP addressing in “External/DMZ IP Assignment”. Under MAC Addresses, you can add the MAC addresses for the hosts during the Cloud Provider Pod Designer workflow, or later during the deployment workflow; the number of available MAC address text boxes depends on how many hosts were configured on the Sizing page.


Enter your resource cluster details such as the VXLAN segment, etc.


Choose the hypervisor’s NIC allocation.


Enter license keys now, or assign licenses after deployment.


“Generate all Documentation Files” – this is very important and providers will love it. It automates the creation of the design document and configuration workbook for your environment, which used to be a major pain point: architects and consultants spent a lot of time writing design documents with Visio diagrams, etc. All of this is generated automatically by Cloud Provider Pod.


Once you click on “Generate Configuration”, it generates your deployment bundle and documentation and emails them to you; you can then use the “Cloud Provider Pod Deployer” to start the deployment. Here is the overall flow of the entire process.


Cloud Provider Pod Deployer

Use the Cloud Provider Pod Deployer to deploy the entire infrastructure at the click of a button. Detailed documentation and step-by-step instructions on how to use the Cloud Provider Pod Deployer to create a new environment are available in the Cloud Provider Operations guide. This guide is delivered by the Cloud Provider Pod Designer as part of the generated documentation, emailed to the address you registered at the start of the designer.

The Deployer video is here for your reference – https://www.youtube.com/watch?v=5xOiToL2o94&feature=youtu.be&list=PLunwH0gjkUBi7Mu18nNXxUl6FgzpU3iyd

 

 
