Learn VMware Cloud Foundation – Part 01


Starting a series of posts on learning VMware Cloud Foundation, here is the first in the series…

Basically, VMware Cloud Foundation brings together vSphere, vSAN, and NSX into a next-generation hyper-converged platform. The product that ties all of these components together in an easy-to-deploy and easy-to-consume manner is a new product called SDDC Manager. SDDC Manager allows you to consume the entire stack as a single unified entity.

VMware Cloud Foundation can be consumed both on premises on qualified hardware and as a service from cloud partners such as IBM SoftLayer. Customers can now build a true hybrid cloud, linking the private and public clouds through this unified and common foundation across both environments.

[Figure 1]

Why Choose VCF

Standard design – virtualization components like vSphere ESXi, vSAN, and NSX, and management components like vCenter, NSX Manager, and NSX Controllers are automatically deployed and configured according to a validated datacenter architecture based on best practices. This eliminates the lengthy planning cycles enterprises spend on vSphere, NSX, vSAN, vROps, and Log Insight design and deployment.

Fully Integrated stack – with VCF, the VMware virtualization components (ESXi, vSAN, NSX) and the management software (vCenter, Log Insight, vROps, SDDC Manager) are combined into a single cloud infrastructure platform, eliminating the need to rely on complex interoperability matrices.

Automates Hardware and Software Bring-Up – Cloud Foundation automates the installation of the entire VMware software stack. Once the rack is installed and powered on and the networking is in place, SDDC Manager leverages its knowledge of the hardware bill of materials and user-provided environmental information (e.g. DNS, IP address pools) to initialize the rack. Time savings vary by customer, but software installation time is estimated to drop from several weeks to as little as two hours thanks to the automation of previously manual functions, including provisioning workloads, automated provisioning of networks, allocation of resources based on service needs, and provisioning of endpoints. When the process completes, the customer has a virtual infrastructure ready for deploying vSphere clusters and provisioning workloads.
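
To make the user-provided environmental information a little more concrete, here is a minimal sketch of the kind of inputs a bring-up process consumes. The field names and structure below are invented for illustration and are not SDDC Manager's actual input format.

```python
# Hypothetical illustration of the environmental information consumed during
# bring-up. Field names are invented for this sketch and do not reflect the
# product's actual input format.
import ipaddress

bringup_input = {
    "dns_servers": ["192.168.1.10", "192.168.1.11"],   # example values
    "ntp_servers": ["pool.ntp.org"],
    "management_ip_pool": {"start": "192.168.10.50", "end": "192.168.10.100"},
    "vmotion_vlan": 1612,
    "vsan_vlan": 1613,
}

def pool_size(pool):
    """Number of addresses in an inclusive start/end IP pool."""
    start = int(ipaddress.IPv4Address(pool["start"]))
    end = int(ipaddress.IPv4Address(pool["end"]))
    return end - start + 1

# A rack can hold up to 32 hosts, so the management pool should cover at least that.
assert pool_size(bringup_input["management_ip_pool"]) >= 32
```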

Lifecycle management automation – Data center upgrades and patch management are typically manual, repetitive tasks that are prone to configuration and implementation errors. Ensuring interoperability of software and hardware firmware when one component is patched or upgraded requires extensive quality assurance testing in staging environments.

SDDC Manager provides built-in capabilities to automate the bring-up, configuration, provisioning, and patching/upgrades of the cloud infrastructure. Lifecycle management in SDDC Manager can be applied to the entire infrastructure or to specific workload domains and is designed to be non-disruptive to tenant virtual machines (VMs).

[Figure 2]

Integrates Management of Physical and Virtual Infrastructure – SDDC Manager understands the physical and logical topology of the software defined data center and the underlying components’ relation to each other, and efficiently monitors the infrastructure to detect potential risks, degradations and failures. SDDC Manager provides stateful alert management to prevent notification spam on problem detection. Each notification includes a clear description of the problem and provides remediation actions needed to restore service.

Components of VCF –

[Figure 3: Components of VCF]

Physical Architecture of VCF

A VCF instance starts with a single rack and scales up to 8 racks, with up to 32 hosts per rack, giving a total of 256 hosts per VCF instance. Each rack contains between 4 and 32 vSAN Ready Nodes, one management switch, and two top-of-rack (ToR) switches. In multi-rack configurations, a pair of redundant spine switches is added to provide inter-rack connectivity, with racks 2-8 connecting to spine switches located in the first or second rack.
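
A quick sanity check on the scale figures above:

```python
# Simple arithmetic on the rack and host limits described above.
hosts_per_rack_max = 32
racks_per_instance_max = 8
print(hosts_per_rack_max * racks_per_instance_max)   # 256 hosts in a fully scaled-out instance

hosts_per_rack_min = 4
racks_min = 1
print(hosts_per_rack_min * racks_min)                # 4 hosts in the smallest configuration
```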

[Figure 4]

Spine Switches

The Cloud Foundation system contains two spine switches. These switches extend the network fabric of the top of rack (ToR) switches between racks and are used for inter-rack connectivity only. The hardware vendor connects the available uplink ports of the ToR switches to the spine switches.
Spine switches are required only in multi-rack installations of Cloud Foundation and are placed in the second rack.

Management Switch

The management switch provides out-of-band (OOB) connectivity to the baseboard management controller (BMC) on each server. The management network fabric does not carry vSphere management, vSAN, or vMotion traffic; that traffic resides on the network fabric created by the ToR and spine switches. As a result, the management switch is a non-redundant component in the physical rack. If this switch goes down, some functionality such as monitoring may not be available until it comes back up. Workloads will continue to run, but the infrastructure associated with them cannot be modified or controlled.

Open Hardware Management System (OHMS) – OHMS, which was recently made open source, runs on each management switch. It is a Java runtime software agent that is invoked to manage the physical hardware across the racks. SDDC Manager communicates with OHMS to configure switches and hosts (via Cisco APIs, CIMC, Dell interfaces, etc.). VMware has developed plugins for Arista and Cisco, and now that OHMS is open source, vendors can write their own plugins for other hardware platforms.
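
To illustrate the plugin idea, here is a toy sketch of what a vendor hardware plugin could conceptually look like. Note that the real OHMS agent is written in Java and its actual interfaces differ; every class, method, and value below is hypothetical.

```python
# Purely illustrative plugin-style abstraction, loosely inspired by the idea
# behind OHMS vendor plugins. The real OHMS SPI is Java and looks different;
# every name here is hypothetical.
from abc import ABC, abstractmethod

class SwitchPlugin(ABC):
    """Hypothetical interface a hardware vendor might implement."""

    @abstractmethod
    def get_port_status(self, port: str) -> str: ...

    @abstractmethod
    def configure_vlan(self, port: str, vlan_id: int) -> None: ...

class ExampleVendorPlugin(SwitchPlugin):
    """Toy implementation standing in for a vendor-specific plugin."""

    def __init__(self):
        self._vlans = {}

    def get_port_status(self, port: str) -> str:
        # A real plugin would query the switch's management API here.
        return "up"

    def configure_vlan(self, port: str, vlan_id: int) -> None:
        self._vlans[port] = vlan_id

# The management plane loads the plugin that matches the rack's hardware and
# drives it through the common interface:
plugin = ExampleVendorPlugin()
plugin.configure_vlan("Ethernet1/1", 1611)
print(plugin.get_port_status("Ethernet1/1"))
```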

Top of Rack Switches

A physical rack contains two top-of-rack (ToR) switches, each of which has 48 10GE ports and at least 4 x 40GE uplink ports. The ToR and spine switches carry all network traffic from the servers, including VM network, VM management, vSAN, and vMotion traffic. On rack 1 in a multi-rack Cloud Foundation, the ToRs also carry traffic to the enterprise network via two of the uplink ports. The ToR switches provide higher bandwidth as well as redundancy for continued operation in case one of the ToR switches goes down. If the installation has spine switches, two uplink ports from each ToR switch on each rack are connected to each spine switch.

Servers

A physical rack must contain a minimum of four dual-socket 1U servers. You can incrementally add servers to the rack up to a maximum of 32 servers. All servers within a rack must be of the same model and type, and the disk size and storage configuration must be identical as well. Memory and CPU (e.g. per-CPU core count) can vary between servers.

Management Domain

SDDC Manager configures the first four servers in each physical rack into an entity called the management domain. After you deploy Cloud Foundation, you can expand the management domain. The management domain manages the hosts in that rack, and all disk drives are claimed by vSAN. The management domain contains the following:
vCenter Server Appliance (including both vCenter Server and the Platform Services Controller as separate VMs) managing the vSphere cluster with HA and DRS enabled, plus the following VMs:
NSX Manager
vRealize Operations
vRealize Log Insight
SDDC Manager
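
To see these management VMs from the vSphere side, a short pyVmomi query against the management domain's vCenter works. This is only a sketch: the vCenter hostname and credentials are placeholders, and it assumes pyVmomi is installed (pip install pyvmomi).

```python
# Minimal pyVmomi sketch: list the VMs running under the management domain's
# vCenter. Hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only; use valid certificates in production
si = SmartConnect(host="mgmt-vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    # Expect to see NSX Manager, vRealize Operations, Log Insight, SDDC Manager, etc.
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
    view.Destroy()
finally:
    Disconnect(si)
```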

[Figure 5]

Physical Network Connectivity

All hosts in a physical rack are connected to both ToR switches with 10Gb links. On each host, NIC port 1 is connected to ToR switch 1 and NIC port 2 is connected to ToR switch 2 with Link Aggregation (LAG).
The BMC on each host is connected to the management switch over a 1Gb connection, which is used for OOB management. Both ToR switches are further connected to a pair of spine switches in a dual-LAG configuration using 40Gb links. The spine switches act as an aggregation layer for connecting multiple racks.

Physical Storage Connectivity

The primary source of storage for Cloud Foundation is vSAN. All disks are claimed by vSAN for storage.
The amount of available physical storage in workload domains depends on the number of physical hosts. Storage traffic is carried over the 10Gbps links between the hosts and ToR switches. All vSAN member hosts communicate over this 10Gbps network.

vSphere Network I/O Control (NIOC) can be enabled so that network resource pools can be used to prioritize network traffic by type.
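
As a rough sketch of what enabling NIOC looks like programmatically, the snippet below flips the setting on a distributed switch with pyVmomi. The vCenter details and the switch name ("vcf-dvs") are placeholders, and in a Cloud Foundation deployment this kind of configuration is normally handled by SDDC Manager rather than by hand.

```python
# Sketch: enable Network I/O Control on an existing vSphere Distributed Switch
# using pyVmomi. Hostname, credentials, and the switch name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="mgmt-vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    dvs = next((d for d in view.view if d.name == "vcf-dvs"), None)  # placeholder name
    view.Destroy()
    if dvs is None:
        raise SystemExit("Distributed switch not found")
    # Turn on NIOC so vSAN, vMotion, management, and VM traffic can be
    # prioritized through network resource pools.
    dvs.EnableNetworkResourceManagement(enable=True)
    print("NIOC enabled:", dvs.config.networkResourceManagementEnabled)
finally:
    Disconnect(si)
```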

This covers the hardware architecture of VMware Cloud Foundation. Next, I will cover the software components of VCF. Until then, happy learning 🙂

 

 
