VMware Pivotal Container Service (PKS) provides a Kubernetes-based container service for deploying and operating modern applications across private and public clouds. In essence, it is managed Kubernetes for multiple Kubernetes clusters, aimed at Day 2 operations. PKS is designed with a focus on high availability and auto-scaling, and supports rolling upgrades.
PKS integrates with VMware NSX-T for advanced container networking, including micro-segmentation, ingress control, load balancing, and security policy. By using VMware Harbor, PKS secures container images through vulnerability scanning, image signing, and auditing. A PKS deployment consists of multiple VM instances classified into two categories:
PKS Management Plane –
The PKS management plane consists of the following VMs:
PCF Ops Manager
Pivotal Operations Manager (Ops Manager) is a graphical interface for deploying and managing Pivotal BOSH, PKS Control Plane, and VMware Harbor application tiles. Ops Manager also provides a programmatic interface for performing lifecycle management of Ops Manager and application tiles.
VMware BOSH Director
Pivotal BOSH is an open-source tool for release engineering for the deployment and lifecycle management of large distributed systems. By using BOSH, developers can version, package, and deploy software in a consistent and reproducible manner.
BOSH is the first component installed by Ops Manager and is a primary PKS tile. BOSH was originally designed to deploy open-source Cloud Foundry. Internally, BOSH has the following components:
- Director: The core orchestration engine, which controls the provisioning of VMs, the required software, and service lifecycle events.
- Blobstore: The Blobstore stores the source forms of releases and the compiled images of releases. An operator uploads a release using the CLI, and the Director inserts the release into the Blobstore. When you deploy a release, BOSH orchestrates the compilation of packages and stores the result in the Blobstore.
- Postgres DB: The BOSH Director uses a Postgres database to store information about the desired state of a deployment, including information about stemcells, releases, and deployments. The database is internal to the Director VM.
Pivotal Container Service (PKS Control Plane)
The PKS Control Plane is the self-service API for on-demand deployment and lifecycle management of Kubernetes clusters. The API submits requests to BOSH, which automates the creation, deletion, and updating of Kubernetes clusters.
VMware Harbor
VMware Harbor is an open-source, enterprise-class container registry service that stores and distributes container images in a private, on-premises registry. In addition to providing Role-Based Access Control (RBAC), Lightweight Directory Access Protocol (LDAP), and Active Directory (AD) support, Harbor provides container image vulnerability scanning, policy-based image replication, notary, and auditing services.
PKS Data Plane
K8s is an open-source container orchestration framework. Containers package applications and their dependencies in container images. A container image is a distributable artifact that provides portability across multiple environments, streamlining the development and deployment of software. Kubernetes orchestrates these containers to manage and automate resource use, failure handling, availability, configuration, scalability, and desired state of the application.
Integration with NSX-T
VMware NSX-T helps simplify networking and security for Kubernetes by automating the implementation of network policies, network object creation, network isolation, and micro-segmentation. NSX-T also provides flexible network topology choices and end-to-end network visibility.
PKS integrates with VMware NSX-T for production-grade container networking and security. A new capability introduced in NSX-T 2.2 allows you to perform workload SSL termination using Load Balancing services. PKS can leverage this capability to provide better security and workload protection.
A major benefit of using NSX-T with PKS and Kubernetes is automation: the dynamic provisioning and association of network objects for unified VM and pod networking. The automation includes the following:
- On-demand provisioning of routers and logical switches for each Kubernetes cluster
- Allocation of a unique IP address segment per logical switch
- Automatic creation of SNAT rules for external connectivity
- Dynamic assignment of IP addresses from an IPAM IP block for each pod
- On-demand creation of load balancers and associated virtual servers for HTTPS and HTTP
- Automatic creation of routers and logical switches per Kubernetes namespace, which can isolate environments for production, development, and test.
PKS Management Network
This network is used to deploy the PKS management components. It could be backed by a dvSwitch or an NSX-T logical switch; since my lab uses a no-NAT topology with a virtual switch, I will be using a dvSwitch with the network segment 192.168.110.x/24.
Kubernetes Node Network
This network is used for the Kubernetes management nodes and is allocated to the master and worker nodes. These nodes embed a Node Agent to monitor the liveness of the cluster.
Kubernetes Pod Network
This network is used when an application is deployed into a new Kubernetes namespace. A /24 network is taken from the Pods IP Block and allocated to that specific namespace, allowing network isolation and policies to be applied between namespaces. The NSX-T Container Plugin automatically creates an NSX-T logical switch and Tier-1 router for each namespace.
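To illustrate the per-namespace carving described above, the allocation of /24 subnets from a larger Pods IP Block can be sketched with Python's `ipaddress` module. The 172.16.0.0/16 block and the namespace names below are placeholders for illustration, not values mandated by PKS:

```python
import ipaddress

# Hypothetical Pods IP Block; substitute the block configured in the PKS tile.
pod_ip_block = ipaddress.ip_network("172.16.0.0/16")

# NCP hands each new Kubernetes namespace its own /24 from the block,
# which is what enables per-namespace isolation and policies.
subnet_pool = pod_ip_block.subnets(new_prefix=24)

namespaces = ["production", "development", "test"]
allocations = {ns: next(subnet_pool) for ns in namespaces}

for ns, subnet in allocations.items():
    print(f"{ns}: {subnet}")
```

Each namespace receives a distinct, non-overlapping /24, which is the property the NSX-T automation relies on.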
Load Balancer and NAT Subnet
This network pool, also known as the Floating IP Pool, provides IP addresses for load balancing and NAT services which are required as a part of an application deployment in Kubernetes.
PKS deployment Network Topologies – Refer Here
PKS Deployment Planning
Before you install PKS on vSphere with NSX-T integration, prepare your vSphere and NSX-T environment, ensure that vCenter, the NSX-T components, and the ESXi hosts can communicate with each other, and confirm that you have adequate resources.
PKS Management VM Sizing
When you size the vSphere resources, consider the compute and storage requirements for each PKS management component.
| VM Name | vCPU | Memory (GB) | Storage | No. of VMs |
|---|---|---|---|---|
| Ops Manager | 1 | 8 | 160 GB | 1 |
| PKS Control VM | 2 | 8 | 29 GB | 1 |
| Compilation VMs | 4 | 4 | 10 GB | 4 |
| Client VM | 1 | 2 | 8 GB | 1 |
| VMware Harbor | 2 | 8 | 169 GB | 1 |
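As a quick sanity check, the table above can be totaled programmatically. Note that the compilation VMs are transient, so the peak and steady-state footprints differ; the dictionary below simply restates the table:

```python
# (vCPU, memory GB, storage GB, count) per management-plane VM, from the table above.
vms = {
    "Ops Manager":    (1, 8, 160, 1),
    "PKS Control VM": (2, 8, 29, 1),
    "Compilation VM": (4, 4, 10, 4),
    "Client VM":      (1, 2, 8, 1),
    "VMware Harbor":  (2, 8, 169, 1),
}

def totals(entries):
    """Sum (vCPU, memory, storage) across VM entries, weighted by VM count."""
    entries = list(entries)
    vcpu = sum(c * n for c, _, _, n in entries)
    mem = sum(m * n for _, m, _, n in entries)
    disk = sum(d * n for _, _, d, n in entries)
    return vcpu, mem, disk

peak = totals(vms.values())
# Compilation VMs are deleted once package compilation finishes.
steady = totals(v for k, v in vms.items() if k != "Compilation VM")

print("peak (vCPU, GB RAM, GB disk):", peak)      # → (22, 42, 406)
print("steady-state:                ", steady)    # → (6, 26, 366)
```

The gap between the two totals (16 vCPU, 16 GB RAM) is worth reserving up front, since the four compilation VMs all run during the first cluster deployment.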
Compilation VMs are created when the initial Kubernetes cluster is deployed: software packages are compiled, and the four additional service VMs are automatically deployed as a single task; these VMs are deleted once the compilation process completes. To manage and configure PKS, the PKS and Kubernetes CLI command-line utilities are required; these utilities can be installed locally on a workstation referred to as the Client VM.
Plan your CIDR block
Before you install PKS on vSphere with NSX-T, you should plan the CIDRs and IP blocks that you will use in your deployment, as explained above. These are the CIDR blocks that you need to plan:
- PKS MANAGEMENT CIDR
- PKS LB CIDR
- Pods IP Block
- Nodes IP Block
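A simple pre-flight check that the planned blocks do not collide can be written with the `ipaddress` module. The CIDR values below are hypothetical lab values chosen for illustration, not values required by PKS:

```python
import ipaddress
from itertools import combinations

# Hypothetical lab plan; replace with the CIDRs you actually intend to use.
planned = {
    "PKS MANAGEMENT CIDR": "192.168.110.0/24",
    "PKS LB CIDR":         "10.40.14.0/24",
    "Pods IP Block":       "172.16.0.0/16",
    "Nodes IP Block":      "172.15.0.0/16",
}

def find_overlaps(cidrs):
    """Return every pair of named networks whose address ranges overlap."""
    nets = {name: ipaddress.ip_network(c) for name, c in cidrs.items()}
    return [(a, b) for a, b in combinations(nets, 2)
            if nets[a].overlaps(nets[b])]

conflicts = find_overlaps(planned)
print("conflicts:", conflicts)  # an empty list means the plan is safe
```

The same helper can also be used to compare your plan against the reserved ranges listed below (Docker bridge, Harbor internal bridges, and the Kubernetes services subnet) before committing to it.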
Below are the CIDR ranges that you cannot use, and why:
The Docker daemon on the Kubernetes worker node uses the subnet in the following CIDR range:
If PKS is deployed with Harbor, Harbor uses the following CIDR ranges for its internal Docker bridges:
Each Kubernetes cluster uses the following subnet for Kubernetes services; do not use this IP block for the Nodes IP Block:
In this blog post series I will be deploying the NO-NAT topology and will walk you through the step-by-step process of PKS deployment with NSX-T integration.
The next post in this series is VMware PKS, NSX-T & Kubernetes Networking & Security Explained; it will help you understand what happens behind the scenes in the networking and security stack when PKS and NSX-T deploy Kubernetes and its networking.