Deploy VMware PKS – Part 3

Continuing my PKS installation series, we are now going to install and configure the PKS control plane, which provides a frontend API that Cloud Operators and Platform Operators use to interact with PKS for provisioning and managing Kubernetes clusters (create, delete, list, scale up/down).

Once a Kubernetes cluster has been successfully provisioned through PKS by the Cloud Operations team, the operators provide the external hostname of the cluster and the kubectl configuration file to their developers. Developers can then start consuming the newly provisioned cluster and deploying applications without needing to know anything about the underlying PKS/NSX-T machinery.
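To illustrate that developer-side hand-off, here is a minimal sketch of consuming the kubeconfig the operator provides, assuming the official kubernetes Python client is installed; the kubeconfig path below is a hypothetical example.

```python
# Minimal sketch: what a developer does with the kubeconfig handed over by the operator.
# Assumes `pip install kubernetes`; the kubeconfig path below is a hypothetical example.
import os
from kubernetes import client, config

# Load the operator-supplied kubeconfig pointing at the PKS-provisioned cluster.
config.load_kube_config(config_file=os.path.expanduser("~/.kube/pks-cluster-01.yml"))

v1 = client.CoreV1Api()

# List the cluster nodes to confirm access before deploying applications.
for node in v1.list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```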

Previous posts in this series, for your reference:

Download PKS

First of all, download PKS from the Pivotal Network; the file will have the extension .pivotal.

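If you would rather script the download than use the browser, the pivnet CLI can pull the tile for you. A rough sketch follows, assuming the pivnet CLI is installed and you have a Pivotal Network API token; the product slug, release version, and product-file ID are placeholders to be checked against the product page.

```python
# Rough sketch: drive the pivnet CLI from Python to fetch the PKS tile (.pivotal file).
# Assumes the pivnet CLI is installed and a Pivotal Network API token is available.
# Release version and product-file ID are placeholders - look them up on network.pivotal.io.
import subprocess

API_TOKEN = "<your-pivnet-api-token>"

subprocess.run(["pivnet", "login", f"--api-token={API_TOKEN}"], check=True)
subprocess.run(
    [
        "pivnet", "download-product-files",
        "--product-slug=pivotal-container-service",  # assumed slug for PKS
        "--release-version=<release-version>",
        "--product-file-id=<product-file-id>",
        "--download-dir=.",
    ],
    check=True,
)
```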

Installation Procedure

To import the PKS tile, go to the Ops Manager home page, click “Import a Product”, and select the PKS package to begin the import into Ops Manager. This takes some time, since the package is over 4 GB.


Once the PKS tile has been successfully imported, click the “plus” sign to add the tile, which makes it available for configuration. Then click the orange Pivotal Container Service tile to start the configuration process.


Assign AZ and Networks

  • Here we place the PKS API VM in the Management AZ and on the PKS management network that we created on the dvSwitch in the previous posts.
  • Choose the network that the PKS API VM will use to connect; in our case this is the management network.
  • A first-time installation of PKS does not actually use the “Service Network”, but a network must still be selected. For this installation I created an NSX-T logical switch called “k8s” to serve as the service network in the future; you could also specify “pks-mgmt”, since the setting has no effect on a new installation.

Configure PKS API

  • Generate a wildcard certificate for the PKS API by selecting Generate RSA Certificate, and create a matching DNS record.
  • Worker VM Max in Flight: this controls how many instances of a component (non-canary worker) can start simultaneously when a cluster is created or resized. The value defaults to 1, which means only one component starts at a time.

Create Plans

A plan defines the set of resources used for deploying Kubernetes clusters. You can configure up to three plans in the GUI, and Plan 1 must be configured.

  • Create multiple plans based on your needs; for example, a plan can have either 1 or 3 master nodes.
  • You can choose the number of worker VMs deployed for each cluster. Per the documentation, up to 200 worker nodes have been tested; the number can go beyond 200, but sizing needs to be planned around other factors (such as the applications and their requirements).
  • Availability Zone – select one or more AZs for the master nodes of the Kubernetes clusters deployed by PKS, and configure the same setting for the worker nodes. If you choose multiple AZs, an equal number of worker nodes is deployed across the AZs.
  • Errand VM Type – select the size of the VM that runs the errand. The smallest instance may be sufficient, as the only errand running on this VM is the one that applies the Default Cluster App YAML configuration.
  • To allow users to create pods with privileged containers, select Enable Privileged Containers. Use this with caution: a privileged container essentially disables the security mechanisms provided by the container runtime and allows code to run directly against the underlying system (see the sketch after this list).
  • Disable DenyEscalatingExec – this disables the DenyEscalatingExec admission plugin, which otherwise blocks exec and attach on pods running with escalated privileges.
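To make the privileged-container caveat above concrete, this is roughly what such a pod looks like when created through the Kubernetes Python client; the privileged flag is what the plan-level toggle permits. Illustration only, assuming a kubeconfig for the target cluster.

```python
# Illustration of what "privileged" means: with privileged=True the container bypasses
# most of the isolation the runtime normally enforces, which is why the plan toggle
# should be used with caution. Assumes `pip install kubernetes` and a valid kubeconfig.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="privileged-demo"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="demo",
                image="busybox",
                command=["sleep", "3600"],
                security_context=client.V1SecurityContext(privileged=True),
            )
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```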

Create Plan 2 and Plan 3 as well, or mark them Inactive and create them later. Remember that PKS does not support changing the number of master/etcd nodes for plans that already have deployed clusters.
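For context on how a plan is consumed later (covered in detail in the next post), cluster creation simply references the plan by name. Below is a hedged sketch wrapping the PKS CLI; the cluster name, hostname, and plan name are placeholders.

```python
# Sketch of how a plan is consumed at cluster-creation time: `pks create-cluster`
# references the plan by the name configured in the tile. Assumes the PKS CLI is
# installed and you are already logged in with `pks login`; all names are placeholders.
import subprocess

subprocess.run(
    [
        "pks", "create-cluster", "k8s-cluster-01",
        "--external-hostname", "k8s-cluster-01.lab.local",
        "--plan", "small",        # must match a plan name configured in the PKS tile
        "--num-nodes", "3",       # worker count, within the plan's configured maximum
    ],
    check=True,
)
```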

Configure Kubernetes Cloud Provider (IaaS Provider)

Here you configure the IaaS where all of these VMs will be deployed. In my case this is a vSphere-based cloud, but PKS also supports AWS, GCP, and Azure.

  • Enter the vCenter details such as name, credentials, datastore names, etc.


Configure PKS Logging

  • Logging is optional and can be configured with vRealize Log Insight; for my lab I am leaving it at the default.
  • To enable clusters to drain app logs to sinks using SYSLOG://, select the Enable Sink Resources checkbox.

Configure Networking for Kubernetes Clusters

NSX Manager Super User Principal Identity Certificate – per the NSX-T documentation, a principal can be an NSX-T component or a third-party application such as OpenStack or PKS. With a principal identity, a principal can use the identity name to create an object and ensure that only an entity with the same identity name can modify or delete it (except an Enterprise Admin). A principal identity can only be created or deleted through the NSX-T API; however, you can view principal identities in the NSX Manager UI.

We have to create such an identity: the PKS API uses the NSX Manager superuser principal identity to communicate with NSX-T in order to create, delete, and modify networking resources for Kubernetes cluster nodes. Follow the steps here to create it; a rough sketch of what that involves is shown below.
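As an illustration of what those steps involve, the flow is roughly: generate a certificate/key pair, import the certificate into NSX-T, and register a principal identity bound to it. This is a hedged sketch against the NSX-T API; the hostname and credentials are placeholders, and the exact payload fields vary between NSX-T versions, so follow the official procedure for your build.

```python
# Hedged sketch of registering the NSX Manager superuser principal identity for PKS.
# Prerequisite (outside this script), e.g. with openssl:
#   openssl req -newkey rsa:2048 -x509 -nodes -days 365 \
#       -keyout pks-nsx-t-superuser.key -out pks-nsx-t-superuser.crt \
#       -subj "/CN=pks-nsx-t-superuser"
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"    # placeholder
AUTH = ("admin", "<nsx-admin-password>")         # placeholder

cert_pem = open("pks-nsx-t-superuser.crt").read()

# Import the certificate so NSX-T hands back a certificate ID.
resp = requests.post(
    f"{NSX_MANAGER}/api/v1/trust-management/certificates?action=import",
    auth=AUTH, verify=False,
    json={"display_name": "pks-nsx-t-superuser", "pem_encoded": cert_pem},
)
cert_id = resp.json()["results"][0]["id"]

# Register the principal identity bound to that certificate. Older NSX-T releases take
# "permission_group": "superusers"; newer ones use a "role" field such as "enterprise_admin".
requests.post(
    f"{NSX_MANAGER}/api/v1/trust-management/principal-identities",
    auth=AUTH, verify=False,
    json={
        "name": "pks-nsx-t-superuser",
        "node_id": "pks",
        "certificate_id": cert_id,
        "permission_group": "superusers",
    },
)
```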

  • Choose NSX-T as the networking interface.
  • Specify the NSX Manager hostname and generate the certificate as per the step above.
  • Pods IP Block ID – enter the UUID of the IP block to be used for Kubernetes pods. PKS allocates IP addresses for pods when they are created in Kubernetes; every time a namespace is created, a subnet from this IP block is allocated. (The UUIDs for these fields can also be pulled from the NSX-T API; see the lookup sketch after this list.)
  • Nodes IP Block ID – enter the UUID of the IP block to be used for Kubernetes nodes. PKS allocates IP addresses for nodes when they are created in Kubernetes; the node networks are created in a separate IP address space from the pod networks.
  • T0 Router ID – enter the UUID of the T0 router.
  • Floating IP Pool ID – enter the ID of the floating IP pool you created for load balancer VIPs. PKS uses this pool to allocate IP addresses to the load balancers created for each cluster. The load balancer routes the API requests to the master nodes and the data plane.
  • Node DNS – specify the DNS server(s) for the nodes, and ensure the nodes can reach those DNS servers.
  • vSphere Cluster Names – enter a comma-separated list of the vSphere clusters where you will deploy Kubernetes clusters. The NSX-T pre-check errand uses this field to verify that the hosts from the specified clusters are available in NSX-T.
  • HTTP/HTTPS Proxy – Optional
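Rather than hunting through the NSX Manager UI for each of these UUIDs, they can also be pulled from the NSX-T REST API. A hedged sketch; the hostname and credentials are placeholders, and verify=False is only acceptable in a lab with self-signed certificates.

```python
# Quick lookup of the UUIDs the networking pane asks for, via the NSX-T API.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"    # placeholder
AUTH = ("admin", "<nsx-admin-password>")         # placeholder

def results(path):
    return requests.get(f"{NSX_MANAGER}{path}", auth=AUTH, verify=False).json()["results"]

# Pods / Nodes IP Block IDs
for block in results("/api/v1/pools/ip-blocks"):
    print("IP block :", block["display_name"], "->", block["id"])

# T0 Router ID
for router in results("/api/v1/logical-routers?router_type=TIER0"):
    print("T0 router:", router["display_name"], "->", router["id"])

# Floating IP Pool ID
for pool in results("/api/v1/pools/ip-pools"):
    print("IP pool  :", pool["display_name"], "->", pool["id"])
```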

Configure User Account and Authentication (UAA)

Before users can log in and use the PKS CLI, you must configure PKS API access with UAA. You use the UAA Command Line Interface (UAAC) to target the UAA server and request an access token for the UAA admin user. If your request is successful, the UAA server returns the access token. The UAA admin access token authorizes you to make requests to the PKS API using the PKS CLI and grant cluster access to new or existing users.
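UAA exposes a standard OAuth2 token endpoint on the PKS API VM (port 8443 in this release), so the same admin token that uaac retrieves can also be requested directly. A hedged sketch; the hostname and the admin client secret (found under the PKS tile's Credentials tab) are placeholders.

```python
# Hedged sketch: request the UAA admin token (the equivalent of
# `uaac token client get admin -s <secret>`) straight from the OAuth2 token endpoint.
import requests

PKS_API = "https://pks-api.lab.local:8443"   # UAA listens on 8443 on the PKS API VM

resp = requests.post(
    f"{PKS_API}/oauth/token",
    auth=("admin", "<uaa-admin-client-secret>"),   # client_id / client_secret
    data={"grant_type": "client_credentials"},
    verify=False,                                  # lab with self-signed certificates
)
print(resp.json()["access_token"])
```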

  • I am leaving the settings at their defaults, with some changes to the token timers.

Monitoring

You can monitor Kubernetes clusters and pods using VMware Wavefront, which I will cover in a separate post.

  • For now, leave it at the default.

Usage Data

This section covers VMware’s Customer Experience Improvement Program (CEIP) and the Pivotal Telemetry Program (Telemetry).

  • Choose based on your preference.

Errands

Errands are scripts that run at designated points during an installation.

  • Since we are running PKS with NSX-T, we need to verify our NSX-T configuration, so leave the corresponding verification errand enabled.

Resource Config for PKS VM

Edit the resources used by the Pivotal Container Service job. If there are timeouts while accessing the PKS API VM, use a larger VM type; for this lab I am going with the default.

  • Leave it at the default.

Missing Stemcell

A stemcell is a versioned operating system image wrapped with IaaS-specific packaging. A typical stemcell contains a bare-minimum OS skeleton with a few common utilities pre-installed, a BOSH agent, and a few configuration files to securely configure the OS by default. For example, with vSphere the official stemcell for Ubuntu Trusty is an approximately 500 MB VMDK file.

Click the missing stemcell link, which takes you to the Stemcell Library. Here you can see that PKS requires stemcell 170.15; since I had already downloaded it, 170.25 shows in the deployed section, but on a fresh installation it will show none deployed. Click IMPORT STEMCELL and choose a stemcell to import; it can be downloaded from here.


Apply Changes

Return to the Ops Manager Installation Dashboard, click “Review Pending Changes”, and finally “Apply Changes”. This deploys the PKS API VM to the IaaS location you chose while configuring the PKS tile.


If the tile configuration is correct, then after around 30 minutes you will see a message that the deployment completed successfully, which is a very nice feeling after all the hard work and dedication (for me it failed a couple of times because of storage, network, and resource issues).

To identify which VM has been deployed, you can check the custom attributes in vCenter, or go back to the Installation Dashboard, click the PKS tile, and open the Status tab. Here we can see the IP address of our PKS API VM; also note the CID, which is the VM name in the vCenter inventory. You can also see the health of the PKS VM.
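Once the Status tab shows the VM as healthy, a quick reachability test against the PKS API endpoint (port 9021) from your workstation is a useful sanity check before moving on. A minimal hedged sketch; the hostname is a placeholder.

```python
# Minimal post-deployment sanity check: confirm the PKS API answers on its API port (9021)
# before pointing the PKS CLI at it. Hostname is a placeholder; verify=False is for a lab
# with self-signed certificates only.
import requests

PKS_API = "https://pks-api.lab.local:9021"

try:
    resp = requests.get(PKS_API, verify=False, timeout=10)
    print("PKS API reachable, HTTP status:", resp.status_code)
except requests.exceptions.RequestException as err:
    print("PKS API not reachable yet:", err)
```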


This completes the PKS API VM deployment procedure. In the next post, we will deploy a Kubernetes cluster.
