vCloud Director 9.7 dropped support for PostgreSQL 9.5, so I had to upgrade my PostgreSQL installation to version 10 before updating vCloud Director to 9.7. I followed the steps below to upgrade the database; at a high level they are:
- Back up the existing database and data directory.
- Uninstall the old version of PostgreSQL.
- Install PostgreSQL 10.
- Restore the backup.
- Create the database backup:
- su - postgres
- pg_dumpall > /tmp/pg9dbbackup
- Check and stop the service:
- #service postgresql-9.5 stop
- Move the current data directory to /tmp as data.old using the command below.
- #mv /var/lib/pgsql/9.5/data/ /tmp/data.old
- Uninstall PostgreSQL 9.5 using:
- Install PostgreSQL v10:
- Initialise the database
- service postgresql-10 initdb
- As suggested by my friend Miguel, if the above step does not work, use /usr/pgsql-10/bin/postgresql-10-setup initdb instead.
- Copy pg_hba.conf and postgresql.conf from the backed-up directory to the new data directory; this saves some time compared with editing the new files by hand.
- cp /tmp/data.old/pg_hba.conf /var/lib/pgsql/10/data/
- cp /tmp/data.old/postgresql.conf /var/lib/pgsql/10/data/
- service postgresql-10 start
- Restore the backup:
- su - postgres
- psql -d postgres -f /tmp/pg9dbbackup
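Put together, the steps above can be sketched as a single script. This is only a sketch, assuming a yum-based RHEL/CentOS host and the default PostgreSQL package names (which may differ on your system); review it before running, since it removes the old PostgreSQL packages.

```shell
# Sketch of the PostgreSQL 9.5 -> 10 upgrade; written to a file here so it
# can be reviewed (and syntax-checked) before being run on the DB host.
cat > upgrade-pg.sh <<'EOF'
#!/bin/bash
set -e
# 1. Full logical backup of all databases
su - postgres -c 'pg_dumpall > /tmp/pg9dbbackup'
# 2. Stop the old server and keep its data directory aside
service postgresql-9.5 stop
mv /var/lib/pgsql/9.5/data /tmp/data.old
# 3. Swap the packages (package names are assumptions; check your repo)
yum -y remove postgresql95-server postgresql95
yum -y install postgresql10-server postgresql10
# 4. Initialise the new cluster
/usr/pgsql-10/bin/postgresql-10-setup initdb
# 5. Carry over the old configuration and start the new server
cp /tmp/data.old/pg_hba.conf /var/lib/pgsql/10/data/
cp /tmp/data.old/postgresql.conf /var/lib/pgsql/10/data/
service postgresql-10 start
# 6. Restore the backup into the new server
su - postgres -c 'psql -d postgres -f /tmp/pg9dbbackup'
EOF
bash -n upgrade-pg.sh && echo "syntax OK"
```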
Then run the reconfigure-database command (adjust the environment variables for your setup), and that's it.
This completes the database upgrade and migration procedure.
In this part I will begin the PKS installation by deploying Pivotal Ops Manager, which provides a management interface (UI/API) for platform operators to manage the complete lifecycle of both BOSH and PKS, from installation through patching and upgrades.
The other posts in this series are here:
In addition, you can also deploy new application services using Ops Manager Tiles like adding an Enterprise-class Container Registry like VMware Harbor which can then be configured to work with PKS.
- Once downloaded, log into vCenter using the vSphere Web Client or HTML5 Client and deploy the Ops Manager OVA.
- Choose your management cluster, the appropriate network, and the other OVA deployment options; I am not going to cover the OVA deployment procedure here. At the Customize template step, enter the following details:
- Admin Password: A default password for the user “ubuntu”.
- If you do not enter a password, Ops Manager will not boot up.
- Custom hostname: the hostname for the Ops Manager VM, in my example opsmgr.corp.local.
- DNS: One or more DNS servers for the Ops Manager VM.
- Default Gateway: The default gateway for Ops Manager.
- IP Address: The IP address of the Ops Manager network interface.
- NTP Server: The IP address of one or more NTP servers for Ops Manager.
- Netmask: The network mask for Ops Manager.
- Create a DNS entry for the IP address that you used for Ops Manager; we will use it in the next steps. Browse to this DNS name/IP address, which takes you to the authentication system for the initial setup. For this lab I will use internal authentication, so click on "Internal Authentication".
- Next, you will be prompted to create a new admin user, which we will use to manage BOSH. Once you have created the user, log in with the new account.
- Once you are logged into Ops Manager, you can see that the BOSH tile is already present but shows as unconfigured (orange denotes unconfigured), which means BOSH has not yet been deployed. Click on the tile to begin the configuration and deploy BOSH.
Before starting the BOSH tile configuration, we need to prepare NSX Manager. The procedure below covers:
Generating and Registering the NSX Manager Certificate for PKS
The NSX Manager CA certificate is used to authenticate PKS with NSX Manager. You create an IP-based, self-signed certificate and register it with NSX Manager. By default, NSX Manager includes a self-signed API certificate with its hostname as the subject and issuer. PKS Ops Manager performs strict certificate validation and expects the subject and issuer of the self-signed certificate to be either the IP address or the fully qualified domain name (FQDN) of the NSX Manager. That is why we need to regenerate the self-signed certificate with the FQDN of the NSX Manager in the subject and issuer fields and then register the certificate with NSX Manager using the NSX API.
- Create a file named "nsx-cert.cnf" for the certificate request parameters on a Linux VM where the openssl tool is installed.
- Write the content below into the file we created in the step above.
- Export the NSX_MANAGER_IP_ADDRESS and NSX_MANAGER_COMMONNAME environment variables using the IP address of your NSX Manager and the FQDN of the NSX Manager host.
- Generate the certificate by running the openssl command below:
- Verify the certificate:
- ~$ openssl x509 -in nsx.crt -text -noout, and ensure the SAN has the DNS name and IP addresses.
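The cnf file, the environment variables, the certificate generation, and the verification can be sketched together as follows. The IP address and FQDN are placeholder values, and the cnf content is a minimal example of the openssl request-file format with a SAN section; your actual file may carry more DN fields.

```shell
# Placeholder values -- use your NSX Manager's real IP and FQDN
export NSX_MANAGER_IP_ADDRESS=192.168.110.42
export NSX_MANAGER_COMMONNAME=nsxmgr.corp.local

# Minimal request-parameters file; the heredoc expands the variables above
cat > nsx-cert.cnf <<EOF
[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = req_ext
prompt = no
[ req_distinguished_name ]
commonName = ${NSX_MANAGER_COMMONNAME}
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = ${NSX_MANAGER_COMMONNAME}
IP.1 = ${NSX_MANAGER_IP_ADDRESS}
EOF

# Generate a self-signed certificate (valid two years) with the SAN extension
openssl req -newkey rsa:2048 -x509 -nodes -days 730 -sha256 \
  -keyout nsx.key -out nsx.crt -config nsx-cert.cnf -extensions req_ext

# Verify: the SAN should list both the DNS name and the IP address
openssl x509 -in nsx.crt -text -noout | grep -A1 'Subject Alternative Name'
```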
- Import this certificate into NSX Manager: go to System -> Trust -> Certificates and click Import -> Import Certificate.
- Ensure that Certificate looks like this in your NSX Manager.
- Next, register the certificate with NSX Manager using the procedure below; first get the ID of the certificate from the GUI.
- Run the command below in your API client to register the certificate, replacing "CERTIFICATE-ID" with your certificate ID.
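As a sketch, the registration call uses the NSX-T node API's apply_certificate action; the manager address and certificate ID below are placeholder values, so verify the exact endpoint against your NSX-T version's API guide.

```shell
# Placeholders: your NSX Manager FQDN/IP and the certificate ID copied from
# System -> Trust -> Certificates in the GUI
NSX_MANAGER=nsxmgr.corp.local
CERTIFICATE_ID=e61c7537-3090-4149-b2b6-19915b2e4ac9

# The node API call that makes NSX Manager serve the imported certificate
URL="https://${NSX_MANAGER}/api/v1/node/services/http?action=apply_certificate&certificate_id=${CERTIFICATE_ID}"
echo "POST ${URL}"

# Send it with your API client, for example:
#   curl -k -u admin -X POST "$URL"
```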
Now let's configure the BOSH tile, which will deploy BOSH based on our input parameters.
Configure BOSH Tile to Deploy BOSH Director
Click on the tile. It will open the tile’s setting tab with the vCenter Config parameters page.
- vCenter Config
- Name: a unique, meaningful name.
- vCenter Host: the hostname of the vCenter.
- vCenter Username: a user for the above vCenter with create and delete privileges for virtual machines (VMs) and folders.
- vCenter Password: the password of the above user.
- Datacenter Name: Exact name of data center object in vCenter
- Virtual Disk Type: Select “Thin” or “Thick”
- Ephemeral Datastore Names (comma delimited): the names of the datastores that store ephemeral VM disks deployed by Ops Manager; you can specify multiple datastores separated by commas.
- Persistent Datastore Names (comma delimited): The names of the datastores that store persistent VM disks deployed by Ops Manager.
- To deploy BOSH as well as the PKS control plane VMs, Ops Manager uploads a stemcell (a VM template) and clones from that image for both the PKS management VMs and the base K8S VMs.
- NSX-T Config
- Choose NSX Networking and Select NSX-T
- NSX Address: IP/DNS Name for NSX-T Manager.
- NSX Username and NSX Password: NSX-T credential
- NSX CA Cert: open the NSX CA cert that you generated in the section above and copy/paste its content into this field.
- Other Config
- VM Folder: The vSphere datacenter folder where Ops Manager places VMs.
- Template Folder: The vSphere folder where Ops Manager places stemcells(templates).
- Disk path Folder: The vSphere datastore folder where Ops Manager creates attached disk images. You must not nest this folder.
- Click "Save".
- Director Config
- For Director config, I entered a few details:
- NTP Server Details
- Enable VM Resurrector Plugin
- Enable Post Deploy Scripts
- Recreate all VMs (optional)
- Availability Zone
- Availability zones are defined at the vSphere cluster level. These AZs are then used by the BOSH Director to determine where to deploy the PKS management VMs as well as the Kubernetes VMs. Multiple availability zones allow you to provide high availability across data centers. For this demonstration I have created two AZs, one for management and one for compute.
- Create Network
- Since I am using a dvs for my PKS management components, we need to specify those details in this segment; make sure you select the Management AZ, which is the vSphere cluster where the BOSH and PKS control plane VMs will be deployed.
- Assign AZs and Networks
- In this section, define the AZ and networking placement settings for the PKS management VM. Singleton Availability Zone: the Ops Manager Director installs in this availability zone.
- Security & Syslog
- I am leaving this section at the defaults; if your deployment requires changes, please refer to the documentation.
- Resource Config
- As per the sizing in part 1, the BOSH Director VM by default allocates 2 vCPUs, 8 GB memory, and a 64 GB disk, plus a 50 GB persistent disk, and each of the four compilation VMs uses 4 vCPUs, 4 GB memory, and a 16 GB disk. For my lab deployment I changed these to suit my lab resources. The BOSH Director needs a minimum of 8 GB memory to run, so choose options accordingly.
- Review Pending Changes and Apply Changes
- With all the input completed, return to the Installation Dashboard view by clicking Installation Dashboard at the top of the window. The BOSH Director tile will now have a green bar indicating that all the required parameters have been entered. Next, click REVIEW PENDING CHANGES and then Apply Changes.
- Monitor Installation and Finish
- If all the inputs are right, you will see that your installation is successful.
After you log into your vCenter, you will see a new powered-on VM in your inventory whose name starts with vm-UUID: this is the BOSH VM. Ops Manager uses vSphere custom attributes to add metadata fields that identify the various VMs it deploys; you can check what type of VM this is by simply looking at the deployment, instance_group, or job attribute. In this case, we can see it is noted as p-bosh.
This completes the Ops Manager and BOSH deployment; in the next post we will install the PKS tile.
In continuation of my previous blog post here, where I explained the PKS components and sizing details, in this post I will cover the PKS component deployment.
Previous Post in this Series:
Getting Started with VMware PKS & NSX-T
My PKS deployment topology looks like this:
- PKS Deployment Topology – PKS management stack running out of NSX-T
- PKS VMs (Ops Manager, BOSH, PKS Control Plane, Harbor) are deployed to a VDS backed portgroup
- Connectivity between PKS VMs, K8S Cluster Management and T0 Router is through a physical router
- NAT is only configured on T0 to provide POD networks access to associated K8S Cluster Namespace
- Create a IP Pool
- Create a new IP pool which will be used to allocate virtual IPs for exposed K8S services. The network also provides IP addresses for Kubernetes API access. Go to Inventory -> Groups -> IP Pool and enter the following configuration:
- Name: PKS-FLOATING-POOL
- IP Range: 172.26.0.100 – 172.26.255.254
- CIDR: 172.26.0.0/16
- Create POD-IP-BLOCK
- We need to create a new POD IP block which will be used by PKS on demand to create smaller /24 networks and assign them to each K8S namespace. This IP block should be sized sufficiently to ensure that you do not run out of addresses. To create the POD-IP-BLOCK, go to NETWORKING -> IPAM and enter the following:
- Create NODEs-IP-BLOCK
- We need to create a new NODEs IP block which will be used by PKS to assign IP addresses to Kubernetes master and worker nodes. Each Kubernetes cluster takes its own /24 subnet, so to deploy multiple Kubernetes clusters, plan for a block larger than /24 (the recommendation is /16).
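For reference, the three NSX-T objects above might end up looking like this; all addresses are placeholder values from this lab, so size the blocks to your own environment:

```
PKS-FLOATING-POOL (IP Pool):  range 172.26.0.100-172.26.255.254, CIDR 172.26.0.0/16
PODS-IP-BLOCK     (IP Block): CIDR 172.16.0.0/16  (carved into a /24 per K8S namespace)
NODES-IP-BLOCK    (IP Block): CIDR 172.25.0.0/16  (a /24 per Kubernetes cluster)
```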
Prepare Client VM
Copy both the PKS (pks-linux-amd64-1.3.0-build.126 or later) and kubectl (kubectl-linux-amd64-1.12.4 or later) CLI binaries to the VM.
- Now SSH to the Ubuntu VM and run the following commands to make the binaries executable and rename/relocate them to the /usr/local/bin directory:
- chmod +x pks-linux-amd64-1.3.0-build.126
- chmod +x kubectl-linux-amd64-1.12.4
- mv pks-linux-amd64-1.3.0-build.126 /usr/local/bin/pks
- mv kubectl-linux-amd64-1.12.4 /usr/local/bin/kubectl
- Check the versions using pks -v and kubectl version.
- Next, install the Cloud Foundry UAA client (uaac) by running these commands:
- apt -y install ruby ruby-dev gcc build-essential g++
- gem install cf-uaac
- Check the version using uaac -v.
- Next is to install
This completes this part; in the next part we will start deploying the PKS management VMs and their configuration.