Month: October 2016

VMware vCenter Appliance (VCSA) 6.5 Now Running on PhotonOS

Since vSphere 5.5 the VCSA has run on SUSE Linux Enterprise Server (SLES), with the most recent releases on SLES 11. Starting with vSphere 6.5, the vCenter Server Appliance runs on VMware’s own Photon OS, a minimal Linux container host optimized for running on the VMware platform.


This is a great move by VMware: it simplifies support and maintenance across the OS/appliance lifecycle and streamlines updates and patches. VMware no longer has to rely on SUSE Linux when fine-tuning applications and dependencies or releasing updates.

VCSA 6.5 runs on an embedded PostgreSQL database. It also has a management UI at https://<IP address of appliance>:5480, which lets you monitor database utilization and size along with CPU and memory.

Additional benefits of using Photon OS:

  • The OS comes Pre-hardened.
  • More than 80% reduction in disk space for OS.
  • 3-4x reduction in kernel boot time vs. a general-purpose kernel, which helps vCenter boot up quickly after a failure.


 

Happy Diwali 🙂


What’s New in vSphere 6.5?

Guys, today VMware announced vSphere 6.5. Here is the list of new features and enhancements…

The features that really excite me are below:

Scale Enhancements – New configuration maximums to support even the largest app environments.

VMware vCenter Server® Appliance – The single control center and core building block for vSphere. Going forward, VMware wants everyone to use the vCenter appliance, and a lot of effort has gone into making that possible. The appliance now supports…

  • Native High Availability
    • Now the vCenter Appliance has High Availability, a native HA solution built right into the appliance. Using an Active/Passive/Witness architecture, vCenter is no longer a single point of failure and can provide a 5-minute RTO. This HA capability is available out of the box and has no dependency on shared storage, RDMs, or external databases.

  • VMware Update Manager is now part of the appliance.
    • VMware Update Manager is integrated into the vCenter Server Appliance, so no separate Windows-based deployment is required. Zero setup, an embedded database, and it leverages VCSA HA and backup.
  • Improved Appliance Management
    • The user interface now shows network and database statistics, disk space, and health in addition to CPU and memory statistics, reducing reliance on the command line for simple monitoring and operational tasks.

  • Native Backup and Restore
    • Appliance only. Backup and restore are simplified with a new native file-based backup solution that writes to external storage over the HTTP, FTP, or SCP protocols, and the vCenter Server configuration can be restored to a fresh appliance.

NOTE – These new features are only available in the vCenter Server Appliance.

vSphere Client (HTML5 Web Client)

  • Clean, modern UI built on VMware’s new Clarity UI standards. No browser plugins to install or manage, integrated into vCenter Server 6.5, and full support for Enhanced Linked Mode.

vSphere Integrated Containers (VIC)

  • VIC extends vSphere capabilities to run container workloads; it sits on top of vCenter and integrates with the rest of the VMware stack, such as NSX and vSAN.
  • VIC has three parts. The first, the VIC Engine, delivers a virtual container host that provides a native Docker endpoint. Developers can continue to use familiar Docker commands; the fact that these containers run on virtual infrastructure is transparent to them, except for the benefit that infrastructure resources such as compute and storage can be expanded much more easily.
  • The second component is the registry, which provides an enterprise private registry capability to give IT control over its intellectual property. This component is optional; you can of course use Docker Hub or another registry of your choice.
  • The third component is the container management portal, which provides GUI, API, and CLI interfaces to provision, manage, and monitor containers.

Simplified HA Admission Control

  • Simplified configuration workflow
  • Additional restart priorities added – Highest, High, Medium, Low, and Lowest

Proactive HA 

  • Detects hardware conditions of host components by receiving alerts from a hardware vendor monitoring solution, such as:
    • Dell Openmanage
    • HP Insight Manager
    • Cisco UCS Manager
  • Notifications identify the impacted host, its current state, error causes, severity, and physical remediation steps, and VMs are vMotioned from partially degraded hosts to healthy hosts (this behavior is configurable).
  • I think this is really awesome if configured properly.

Network-Aware DRS

  • Adds network bandwidth considerations by calculating host network saturation (Tx & Rx of connected physical uplinks)

Currently I am going through the detailed feature list and will share more soon. Till then, happy learning 🙂


Docker Engine for Windows Server 2016 Now Available

 

Microsoft announced the general availability of Windows Server 2016 and, with it, the Docker Engine running containers natively on Windows. This expands the Docker platform beyond Linux workloads to support Windows Server applications. The Commercially Supported Docker Engine (CS Docker Engine) is now available at no additional cost with every edition of Windows Server 2016.

Windows Server 2016 is where Docker Windows containers should be deployed for production. For developers planning to do lots of Docker Windows container development, it may also be worth setting up a Windows Server 2016 dev system (in a VM, for example), at least until Windows 10 and Docker for Windows support for Windows containers matures.

Once Windows Server 2016 is running, log in, run Windows Update to ensure you have all the latest updates, and install the Windows-native Docker Engine directly (that is, not using “Docker for Windows”). Run the following in an administrative PowerShell prompt:

# Add the containers feature and restart
Install-WindowsFeature containers
Restart-Computer -Force

# Download, install and configure Docker Engine
Invoke-WebRequest "https://download.docker.com/components/engine/windows-server/
cs-1.12/docker.zip" -OutFile "$env:TEMP\docker.zip" -UseBasicParsing

Expand-Archive -Path "$env:TEMP\docker.zip" -DestinationPath $env:ProgramFiles

# For quick use, does not require shell to be restarted.
$env:path += ";c:\program files\docker"

# For persistent use, will apply even after a reboot. 
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Docker", [EnvironmentVariableTarget]::Machine)

# Start a new PowerShell prompt before proceeding
dockerd --register-service
Start-Service docker
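
Once the service is running, a quick sanity check is worthwhile. A minimal sketch; the image name microsoft/nanoserver is an assumption based on the 2016-era Docker Hub, so substitute your preferred Windows base image if the tag has changed:

# Confirm the client and engine are talking to each other
docker version

# Pull and run a small Windows base container (image name is an assumption;
# adjust for your environment)
docker run --rm microsoft/nanoserver cmd /c echo "Hello from Windows containers"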


For More Details Click Here

Guys, let’s learn it before it becomes mainstream in the industry 🙂

 

NFV, SDN and VMware NSX

NFV

After compute virtualization, Network Functions Virtualization (NFV) is the next step: taking the functions of physical networking devices and running them in VMs. NFV separates network functions from routers, firewalls, load balancers, and other dedicated hardware devices and allows network services to be hosted on virtual machines. Virtual machines run on a hypervisor, also called a virtual machine manager, which allows multiple operating systems to share a single hardware processor. When the hypervisor hosts network functions, services that once required dedicated hardware can be performed on standard x86 servers.

SDN

SDN makes the network programmable by separating the control plane (telling the network what goes where) from the data plane (sending packets to specific destinations). It relies on switches that can be programmed through an SDN controller using an industry-standard control protocol, such as OpenFlow.

| SDN | NFV |
| --- | --- |
| Separates the control plane from the data plane and centralizes control and programmability of the network. | Transfers network functions from dedicated appliances to generic servers. |
| Network intelligence is logically centralized in SDN controller software that maintains a global view of the network, which appears to applications and policy engines as a single, logical switch. | As virtual devices, network functions require no extra space, power, or cooling. |
| Operates in campus, data center, and/or cloud environments. | Replaces physical devices that often rapidly reach EOL or run out of capacity and need to be upgraded to larger models. |
| SDN software targets cloud orchestration and networking. | NFV targets the service provider network; NFV software targets routers, firewalls, gateways, WAN, CDN, accelerators, and SLA assurance. |

SDN and NFV Are Better Together

These approaches are mutually beneficial, but they are not dependent on one another: you do not need one to have the other. In reality, however, SDN makes NFV and NV (network virtualization) more compelling, and vice versa. SDN contributes network automation that enables policy-based decisions to orchestrate which network traffic goes where, while NFV focuses on the services, and NV ensures the network’s capabilities align with the virtualized environments they are supporting.

VMware NSX

VMware NSX is a hypervisor networking solution designed to manage, automate, and provide basic Layer 4-7 services to virtual machine traffic. NSX is capable of providing switching, routing, and basic load-balancer and firewall services to data moving between virtual machines from within the hypervisor. For non-virtual-machine traffic (handled by more than 70% of data center servers), NSX requires the traffic to be sent into the virtual environment. While NSX is often classified as an SDN solution, in my understanding that is not really the case.

SDN is defined as providing the ability to manage the forwarding of frames/packets and apply policy; to perform this at scale in a dynamic fashion; and to be programmed. This means that an SDN solution must be able to forward frames. Because NSX has no hardware switching components, it is not capable of moving frames or packets between hosts, or between virtual machines and other physical resources. In my view, this places VMware NSX into the Network Functions Virtualization (NFV) category. NSX virtualizes switching and routing functions, with basic load-balancer and firewall functions.

Use Cases:

Security:

NSX can be used to create a secure infrastructure, which can create a zero-trust security model. Every virtualized workload can be protected with a full stateful firewall engine at a very granular level. Security can be based on constructs such as MAC, IP, ports, vCenter objects and tags, active directory groups, etc. Intelligent dynamic security grouping can drive the security posture within the infrastructure.
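
As a rough illustration of how such a security construct can be created programmatically, here is a minimal PowerShell sketch against the NSX for vSphere REST API. The manager address (nsxmgr.lab.local), credentials, and group name are assumptions, and the manager’s certificate is assumed to be trusted:

# Build basic-auth headers for the NSX Manager API (NSX for vSphere 6.x)
$nsxManager = "https://nsxmgr.lab.local"   # hypothetical NSX Manager FQDN
$cred  = Get-Credential                    # NSX admin credentials
$pair  = "$($cred.UserName):$($cred.GetNetworkCredential().Password)"
$token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
$headers = @{ Authorization = "Basic $token"; "Content-Type" = "application/xml" }

# Security group definition; membership can later be driven by vCenter objects or tags
$body = @"
<securitygroup>
  <name>SG-Web-Servers</name>
  <description>Web tier workloads</description>
</securitygroup>
"@

# globalroot-0 is the global scope in NSX for vSphere
Invoke-RestMethod -Uri "$nsxManager/api/2.0/services/securitygroup/bulk/globalroot-0" -Method Post -Headers $headers -Body $body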

Automation:

VMware NSX provides a full RESTful API to consume networking, security and services, which can be used to drive automation within the infrastructure. IT admins can reduce the tasks and cycles required to provision workloads within the data center using NSX.
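
As a minimal sketch of what consuming that API looks like from PowerShell (the NSX Manager address and credentials are assumptions, and the manager’s certificate is assumed to be trusted), this call lists the logical switches in the inventory:

# Build basic-auth headers for the NSX Manager API
$nsxManager = "https://nsxmgr.lab.local"   # hypothetical NSX Manager FQDN
$cred  = Get-Credential
$pair  = "$($cred.UserName):$($cred.GetNetworkCredential().Password)"
$token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
$headers = @{ Authorization = "Basic $token" }

# GET the logical switch (virtual wire) inventory and show name/ID pairs
$response = Invoke-RestMethod -Uri "$nsxManager/api/2.0/vdn/virtualwires" -Method Get -Headers $headers
$response.virtualWires.dataPage.virtualWire | Select-Object name, objectId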

NSX integrates out of the box with automation tools such as vRealize Automation, which can provide customers with a one-click deployment option for an entire application, including the compute, storage, network, security, and L4-L7 services.

Developers can use NSX with the OpenStack platform. NSX provides a Neutron plugin that can be used to deploy applications and topologies via OpenStack.

Application Continuity:

NSX provides a way to easily extend networking and security across up to eight vCenter Servers, within or across data centers. In conjunction with vSphere 6.0, customers can easily vMotion a virtual machine across long distances, and NSX will ensure that the network and the firewall rules stay consistent across the sites. This essentially maintains the same view across sites.

NSX Cross-vCenter Networking can help build active-active data centers. Customers are using NSX today with VMware Site Recovery Manager to provide disaster recovery solutions. NSX can extend the network across data centers and even to the cloud to enable seamless networking and security.

Features:

Switching:

Logical switching enables extension of a L2 segment / IP subnet anywhere in the fabric independent of the physical network design.

Routing:

Routing between IP subnets can be done in the logical space without traffic leaving the hypervisor; routing is performed directly in the hypervisor kernel with minimal CPU/memory overhead. This distributed logical routing (DLR) provides an optimal data path for traffic within the virtual infrastructure (east-west communication). Additionally, the NSX Edge provides an ideal centralized point for seamless integration with the physical network infrastructure to handle communication with the external network (north-south communication) with ECMP-based routing.

Connectivity to physical networks:

L2 and L3 gateway functions are supported within NSX to provide communication between workloads deployed in logical and physical spaces.

Edge Firewall:

Edge firewall services are part of the NSX Edge Services Gateway (ESG). The Edge firewall provides essential perimeter firewall protection, which can be used in addition to a physical perimeter firewall. The ESG-based firewall is useful in building PCI zones, multi-tenant environments, or DevOps-style connectivity without forcing inter-tenant or inter-zone traffic onto the physical network.

VPN:

NSX offers L2 VPN, IPsec VPN, and SSL VPN services to enable L2 and L3 VPN connectivity. These services address the critical use cases of interconnecting remote data centers and providing remote user access.

Logical Load-balancing:

L4-L7 load balancing with support for SSL termination. The load balancer comes in two different form factors supporting inline as well as proxy mode configurations. It addresses a critical use case in virtualized environments, enabling DevOps-style functionality that supports a variety of workloads in a topology-independent manner.

DHCP & NAT Services:

Support for DHCP servers, DHCP forwarding mechanisms, and NAT services. NSX also provides an extensible platform that can be used for deployment and configuration of third-party vendor services. Examples include virtual form-factor load balancers (e.g., F5 BIG-IP LTM) and network monitoring appliances (e.g., Gigamon GigaVUE-VM). Integration of these services with existing physical appliances, such as physical load balancers and IPAM/DHCP server solutions, is simple.

Distributed Firewall:

Security enforcement is done directly at the kernel and vNIC level. This enables highly scalable firewall rule enforcement by avoiding bottlenecks on physical appliances. The firewall is distributed in kernel, minimizing CPU overhead while enabling line-rate performance.

NSX also provides an extensible framework, allowing security vendors to provide an umbrella of security services. Popular offerings include anti-virus/anti-malware/anti-bot solutions, L7 firewalling, IPS/IDS (host- and network-based) services, file integrity monitoring, and vulnerability management of guest VMs.

I hope this helps you understand what NSX does and its use cases. Comments are welcome to make this post more accurate and interesting… 🙂

Learn NSX – Part-10 (Create Logical Switches)

A logical switch reproduces Layer 2 switching functionality (unicast, multicast, and broadcast traffic) in a virtual environment completely decoupled from the underlying hardware.

  • Select Logical Switches under the Networking & Security section.
  • Click the green + icon to add a new logical switch.
  • In the New Logical Switch dialog box:
    • Enter the logical switch name in the Name text box.
    • Enter a description in the Description text box. For ease of use, add the subnet that will be used within this logical switch.
    • In the Transport Zone selection box, click Change to choose the appropriate transport zone.
    • Leave the Replication Mode as is, or select a different one if you want to override the transport zone’s replication mode. By default, the logical switch inherits the control plane mode from the transport zone.
    • Select the Enable IP Discovery check box to enable ARP suppression. This minimizes ARP flooding within individual VXLAN segments.
    • Select the Enable MAC Learning check box to avoid possible traffic loss during VMware vSphere vMotion.
    • Click OK.

Happy Learning 🙂

In my previous post I demonstrated how we can create a logical switch using the API; for details, please check this post.
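
For quick reference, a minimal sketch of such an API call (the NSX Manager address, credentials, and transport zone ID vdnscope-1 are assumptions; the manager’s certificate is assumed to be trusted):

# Build basic-auth headers for the NSX Manager API
$nsxManager = "https://nsxmgr.lab.local"   # hypothetical NSX Manager FQDN
$cred  = Get-Credential
$pair  = "$($cred.UserName):$($cred.GetNetworkCredential().Password)"
$token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
$headers = @{ Authorization = "Basic $token"; "Content-Type" = "application/xml" }

# XML payload describing the new logical switch
$body = @"
<virtualWireCreateSpec>
  <name>Web-Tier-LS</name>
  <description>Logical switch for the web tier</description>
  <tenantId>default</tenantId>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</virtualWireCreateSpec>
"@

# POST against the transport zone (scope) the switch should span
Invoke-RestMethod -Uri "$nsxManager/api/2.0/vdn/scopes/vdnscope-1/virtualwires" -Method Post -Headers $headers -Body $body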

VMware Validated Design (SDDC) Available for download

The VMware Validated Design provides a set of prescriptive documents that explain how to plan, deploy, and configure a deployment of a Software-Defined Data Center (SDDC). This design supports a number of use cases, and is designed to minimize difficulty in integration, expansion, and operation, as well as future updates and upgrades.

In my opinion this is really going to help architects and administrators align their designs and deployments with VMware recommendations, helping customers build a more robust, highly available, and supported environment.

Download 

Learn NSX – Part-09 (Transport Zone)

The Segment ID Pool specifies a range of VXLAN Network Identifiers (VNIs) to use when building logical network segments. This determines the maximum number of logical switches that can be created in your infrastructure.

  • You must specify a segment ID pool for each NSX Manager to isolate your network traffic.

A transport zone defines the span of a logical switch by delineating the width of the VXLAN/VTEP replication scope and control plane. It can span one or more vSphere clusters. You can have one or more transport zones based on your requirements.

  • Log in to the vSphere Web Client and click Networking & Security.
  • Select Installation under the Networking & Security section and select the Logical Network Preparation tab.
  • Select the Segment ID menu and click Edit.
  • Enter the range of numbers to be used for VNIs in the Segment ID pool text box and click OK.
  • Select the Transport Zones menu and click the green + icon to add a new transport zone.
  • In the New Transport Zone dialog box:
    • Enter the name of the transport zone in the Name text box.
    • (Optional) Add a description in the Description text box.
    • Depending on whether you have NSX Controller nodes in your environment or want to use multicast addresses, select the appropriate Replication mode (also known as the control plane mode).
    • Select the check boxes for each cluster to be added to the transport zone.
    • Click OK.
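
If you prefer to automate these steps, the same configuration can be driven through the NSX REST API. A minimal sketch; the manager address, credentials, segment range, and cluster MoRef (domain-c7) are assumptions for illustration, and the manager’s certificate is assumed to be trusted:

# Build basic-auth headers for the NSX Manager API
$nsxManager = "https://nsxmgr.lab.local"   # hypothetical NSX Manager FQDN
$cred  = Get-Credential
$pair  = "$($cred.UserName):$($cred.GetNetworkCredential().Password)"
$token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
$headers = @{ Authorization = "Basic $token"; "Content-Type" = "application/xml" }

# Create a segment ID pool (range of VNIs)
$segment = @"
<segmentRange>
  <name>Segment-Pool-01</name>
  <begin>5000</begin>
  <end>5999</end>
</segmentRange>
"@
Invoke-RestMethod -Uri "$nsxManager/api/2.0/vdn/config/segments" -Method Post -Headers $headers -Body $segment

# Create a unicast transport zone spanning one cluster (note the nested
# cluster/cluster elements used by the NSX for vSphere schema)
$tz = @"
<vdnScope>
  <name>TZ-01</name>
  <clusters><cluster><cluster><objectId>domain-c7</objectId></cluster></cluster></clusters>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</vdnScope>
"@
Invoke-RestMethod -Uri "$nsxManager/api/2.0/vdn/scopes" -Method Post -Headers $headers -Body $tz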

We have now completed the prerequisites for virtual network deployment. In the next few posts I will help you deploy logical switching, routing, and so on…

Happy Learning 🙂

Containers in Windows Server 2016 TP5

I am just starting to get my head around the concept of containers and decided to take Windows Server 2016 Technical Preview 5 for a test drive, since it includes Docker containers as a feature. It is worth being explicit here (for anyone not aware) that this isn’t Microsoft’s own version of containers; these are Docker containers.

There are two types of containers supported by Microsoft.

Windows Containers

  • Multiple container instances can run concurrently on a host, with isolation provided through namespace, resource control, and process isolation technologies. Windows Server containers share the same kernel with the host, as well as each other.

Hyper-V Containers

  • Multiple container instances can run concurrently on a host; however, each container runs inside of a special virtual machine. This provides kernel level isolation between each Hyper-V container and the container host.

Let’s deploy our first Windows container host:

First Install Container Feature

The container feature needs to be enabled before working with Windows containers. To do so run the following command in an elevated PowerShell session.

Install-WindowsFeature containers

When the feature installation has completed, reboot the computer.

Restart-Computer -Force

Next Install Docker

Docker is required in order to work with Windows containers. Docker consists of the Docker Engine, and the Docker client. For this exercise, both will be installed.

New-Item -Type Directory -Path 'C:\Program Files\docker\'

Invoke-WebRequest https://aka.ms/tp5/b/dockerd -OutFile $env:ProgramFiles\docker\dockerd.exe

Invoke-WebRequest https://aka.ms/tp5/b/docker -OutFile $env:ProgramFiles\docker\docker.exe

[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Docker", [EnvironmentVariableTarget]::Machine)

The commands above create a new Docker folder in Program Files and download the Docker daemon and client binaries into this directory. The last command adds the directory to the Path environment variable.

To install Docker as a Windows service, run the following.

dockerd --register-service

Once installed, the service can be started.

Start-Service Docker

Install Base Container Images

Before working with Windows Containers, a base image needs to be installed. Base images are available with either Windows Server Core or Nano Server as the underlying operating system.

To install the Windows Server Core base image run the following:

Install-PackageProvider ContainerImage -Force
Install-ContainerImage -Name WindowsServerCore

 

Restart the Docker service:

Restart-Service docker
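
With the base image installed and Docker restarted, you can verify everything end to end. A minimal check; the image name below follows the TP5-era convention and is an assumption, and in TP5 the base image may also need to be tagged latest before docker run will resolve it by name:

# List the installed base images
docker images

# Start an interactive Windows Server Core container (TP5-era image name)
docker run -it windowsservercore cmd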
