Storage Type Comparison

This is for my beginner friends in vSphere…

1

vSphere Features Supported by Storage Type:

2

Happy learning 🙂

 

NSX 6.3 – Finally NSX supports vSphere 6.5

VMware has released NSX for vSphere 6.3.0 along with vCenter 6.5a and ESXi 6.5a. Before upgrading your NSX environment to 6.3, you must first upgrade your vCenter and ESXi hosts to 6.5a, as described in KB 2148841.

This version introduces a lot of great new features and enhancements; a few standouts that appealed to me are listed below:

Controller Disconnected Operation (CDO) mode: A new feature called Controller Disconnected Operation (CDO) mode has been introduced. This mode ensures that data plane connectivity is unaffected when hosts lose connectivity with the controller (I have seen a few customers affected by this).

If you run into issues with the controller, you can enable CDO mode to avoid temporary connectivity problems with the controller.

You can enable CDO mode for each host cluster through the transport zone. CDO mode is disabled by default.

When CDO mode is enabled, NSX Manager creates a special CDO logical switch, one for every transport zone. The VXLAN Network Identifier (VNI) of this CDO logical switch is distinct from those of all other logical switches. With CDO mode enabled, one controller in the cluster is responsible for collecting the VTEP information reported by all transport nodes and replicating the updated VTEP information to all other transport nodes. When that controller fails, a new controller is elected as the master to take over this responsibility; all transport nodes connected to the original master are migrated to the new master, and data is synced between the transport nodes and the controllers.

If you add a new cluster to the transport zone, NSX Manager pushes the CDO mode and VNI to the newly added hosts. If you remove the cluster, NSX Manager removes the VNI data from the hosts.

When you disable CDO mode on a transport zone, NSX Manager removes the CDO logical switch.

In Cross-vCenter NSX environment, you can enable CDO mode only on the local transport zones or in a topology where you do not have any local transport zones and have a single universal transport zone for the primary NSX Manager. The CDO mode gets replicated on the universal transport zone for all secondary NSX Managers.

The behavior differs between the primary and secondary NSX Managers:

  • Primary NSX Manager: You can enable CDO mode only on transport zones that do not share the same distributed virtual switch. If the universal transport zone and the local transport zones share the same distributed virtual switch, then CDO mode can be enabled only on the universal transport zone.
  • Secondary NSX Manager: The CDO mode gets replicated on the universal transport zone for all secondary NSX Managers. You can enable CDO mode on local transport zones if they do not share the same distributed virtual switch.

Cross-vCenter NSX Active-Standby DFW Enhancements:

  • Multiple Universal DFW sections are now supported. Both Universal and Local rules can consume Universal security groups in source/destination/AppliedTo fields.
  • Universal security groups: Universal Security Group membership can be defined in a static or dynamic manner. Static membership is achieved by manually adding a universal security tag to each VM. Dynamic membership is achieved by adding VMs as members based on dynamic criteria (VM name).
  • Universal Security tags: You can now define Universal Security tags on the primary NSX Manager and mark for universal synchronization with secondary NSX Managers. Universal Security tags can be assigned to VMs statically, based on unique ID selection, or dynamically, in response to criteria such as antivirus or vulnerability scans.
  • Unique ID Selection Criteria: In earlier releases of NSX, security tags are local to an NSX Manager and are mapped to VMs using the VM’s managed object ID. In an active-standby environment, the managed object ID for a given VM might not be the same in the active and standby datacenters. NSX 6.3.x allows you to configure Unique ID Selection Criteria on the primary NSX Manager that are used to identify VMs when attaching universal security tags: VM instance UUID, VM BIOS UUID, VM name, or a combination of these options.

    • Unique ID Selection Criteria:
      • Use Virtual Machine instance UUID (recommended) – The VM instance UUID is generally unique within a VC domain; however, there are exceptions, such as when deployments are made through snapshots. If the VM instance UUID is not unique, we recommend you use the VM BIOS UUID in combination with the VM name.
      • Use Virtual Machine BIOS UUID – The BIOS UUID is not guaranteed to be unique within a VC domain, but it is always preserved in case of disaster. We recommend you use the BIOS UUID in combination with the VM name.
      • Use Virtual Machine Name – If all of the VM names in an environment are unique, then the VM name can be used to identify a VM across vCenters. We recommend you use the VM name in combination with the VM BIOS UUID.

 

 

Control Plane Agent (netcpa) Auto-recovery: An enhanced auto-recovery mechanism for the netcpa process ensures continuous data path communication. The automatic netcpa monitoring process also auto-restarts in case of any problems and provides alerts through the syslog server. A summary of benefits:

  • automatic netcpa process monitoring.
  • process auto-restart in case of problems, for example, if the system hangs.
  • automatic core file generation for debugging.
  • alert via syslog of the automatic restart event.

Log Insight Content Pack: This has been updated for Load Balancer to provide a centralized Dashboard, end-to-end monitoring, and better capacity planning from the user interface (UI).

Drain state for Load Balancer pool members: You can now put a pool member into Drain state, which forces the server to shut down gracefully for maintenance. Setting a pool member to drain state removes the backend server from load balancing, but still allows the server to accept new, persistent connections.

4-byte ASN support for BGP: BGP configuration with 4-byte ASN support is made available along with backward compatibility for the pre-existing 2-byte ASN BGP peers.

 

Improved Layer 2 VPN performance: Performance for Layer 2 VPN has been improved. This allows a single Edge appliance to support up to 1.5 Gb/s throughput, which is an improvement from the previous 750 Mb/s.

Improved Configurability for OSPF: While configuring OSPF on an Edge Services Gateway (ESG), NSSA can translate all Type-7 LSAs to Type-5 LSAs.

DFW timers: NSX 6.3.0 introduces Session Timers that define how long a session is maintained on the firewall after inactivity. When the session timeout for the protocol expires, the session closes. On the firewall, you can define timeouts for TCP, UDP, and ICMP sessions and apply them to a user defined set of VMs or vNICs.

Linux support for Guest Introspection: NSX 6.3.0 enables Guest Introspection for Linux VMs. On Linux-based guest VMs, the NSX Guest Introspection feature leverages the fanotify and inotify capabilities provided by the Linux kernel.

Publish Status for Service Composer: Service Composer publish status is now available to check whether a policy is synchronized. This provides increased visibility of security policy translations into DFW rules on the host.

NSX kernel modules now independent of ESXi version: Starting in NSX 6.3.0, NSX kernel modules use only the publicly available VMKAPI so that the interfaces are guaranteed across releases. This enhancement helps reduce the chance of host upgrades failing due to incorrect kernel module versions. In earlier releases, every ESXi upgrade in an NSX environment required at least two reboots to make sure the NSX functionality continued to work (due to having to push new kernel modules for every new ESXi version).

Rebootless upgrade and uninstall on hosts: On vSphere 6.0 and later, once you have upgraded to NSX 6.3.0, any subsequent NSX VIB changes will not require a reboot. Instead hosts must enter maintenance mode to complete the VIB change. This includes the NSX 6.3.x VIB install that is required after an ESXi upgrade, any NSX 6.3.x VIB uninstall, and future NSX 6.3.0 to NSX 6.3.x upgrades. Upgrading from NSX versions earlier than 6.3.0 to NSX 6.3.x still requires that you reboot hosts to complete the upgrade. Upgrading NSX 6.3.x on vSphere 5.5 still requires that you reboot hosts to complete the upgrade.

Happy learning 🙂

 

 

 

VMware NSX for vSphere 6.2.5 Released

VMware has released NSX for vSphere 6.2.5, a maintenance release that contains bug fixes.

The 6.2.5 release is a bugfix release that addresses a loss of network connectivity after performing certain vMotion operations in installations where some hosts ran a newer version of NSX, and other hosts ran an older version of NSX.

Details of fixes can be checked Here

But remember, vSphere 6.5 is still not supported!

QoS Tagging and Traffic Filtering on vDS

Two types of QoS marking/tagging are common in networking: 802.1p (CoS), applied to Ethernet (Layer 2) frames, and Differentiated Services Code Point (DSCP) marking, applied to IP packets. Physical network devices use these tags to identify important traffic types and provide Quality of Service based on the value of the tag. As business-critical and latency-sensitive applications are virtualized and run in parallel with other applications on ESXi hosts, it is important to enable traffic management and tagging features on the vDS.

The traffic management feature on the vDS helps reserve bandwidth for important traffic types, and the tagging feature allows the external physical network to understand the level of importance of each traffic type. It is a best practice to tag the traffic near the source to help achieve end-to-end Quality of Service (QoS). During network congestion, tagged traffic is less likely to be dropped, which translates to a higher Quality of Service for the tagged traffic.

Once the packets are classified based on the qualifiers described in the traffic filtering section, users can choose to perform Ethernet (layer2) or IP (layer 3) header level marking. The markings can be configured at the port group level.

Let’s configure this, so that I can ensure my business-critical VMs get higher priority at the physical layer…

Log in to the Web Client, click on the dvSwitch and choose the port group on which you want to apply the tag:

  1. Click on Manage tab
  2. Select the Settings option
  3. Select Policies
  4. Click on Edit

Qos1.png

  1. Click on Traffic filtering and marking
  2. In the Status drop down box choose Enabled
  3. Click the Green + to add a New Network Traffic Rule

qos2

  1. In the Action: drop down box select Tag (default)
  2. Check the box to the right of DSCP value
  3. In the drop down box for the DSCP value select Maximum 63
  4. In the Traffic direction drop down box select Ingress
  5. Click the Green +

qos3

New Network Traffic Rule – Qualifier

Now that you have decided to tag the traffic, the next question is which traffic you would like to tag. There are three options available when defining the qualifier:

  • System Traffic Qualifier
  • New MAC qualifier
  • New IP Qualifier

This means you have options to select packets based on system traffic types, MAC header fields or IP header fields. Here we will create a qualifier based on system traffic.

Select New System Traffic Qualifier from the drop-down menu. qos4.gif

  1. Select Virtual Machine
  2. Click OK

qos5

Check that your rule matches:

  1. Name: Network Traffic Rule 1
  2. Action: Tag
  3. DSCP Value: Checked
  4. DSCP Value: 63
  5. Traffic Direction: Ingress
  6. System traffic: Virtual Machine
  7. Click OK

qos6

Same way you can also allow/block the traffic:

Again go to dvPort – Settings – Policies – Edit – Traffic Filtering and Marking, edit the existing rule that we created above, and change the Action to Drop.

qos7

  1. Click the Green + to add a new qualifier.
  2. Select New IP Qualifier… from the drop down list.

qos8

  1. Select ICMP from the Protocol drop down menu.
  2. Select Source address and enter the IP address of your VM; in my case it is 192.168.100.90.
  3. Click OK

qos9

  1. Finally, click OK. This will drop pings for that particular VM.

In the same way, you can write many rules with various permutations and combinations to help your organisation achieve QoS and traffic filtering on the vDS. I hope this helps you in your environments. Happy Learning 🙂

Learn NSX – Part-13 (Configure OSPF for DLR)

Configuring OSPF on a logical router enables VM connectivity across logical routers and from logical routers to edge services gateways (ESGs).

OSPF routing policies provide a dynamic process of traffic load balancing between routes of equal cost. An OSPF network is divided into routing areas to optimize traffic flow and limit the size of routing tables. An area is a logical collection of OSPF networks, routers, and links that share the same area identification; areas are identified by an Area ID.

Before we proceed with the OSPF configuration, a Router ID must first be configured on our deployed DLR. To configure the Router ID:

1- Log in to the vSphere Web Client and click Networking & Security.

2- Select NSX Edges under the Networking & Security section.

3 – Double-click the distributed logical router on which to configure OSPF.

1.gif

4 – On the Manage tab:

  1. Select the Routing tab.
  2. Select Global Configuration section from the options on the left.
  3. Under the Dynamic Routing Configuration section, click Edit.

2.gif

5 – In the Edit Dynamic Routing Configuration dialog box:

  1. Select an interface from the Router ID drop-down menu to use as the OSPF Router ID.
  2. Select the Enable Logging check box.
  3. Select Info from the Log Level drop-down menu.
  4. Click OK.

4

6 – Click Publish Changes.

5

Now that the Router ID is configured, let’s configure OSPF:

1 – Select the OSPF section from the options on the left:

  1. Under the Area Definitions section, click the green + icon to add an OSPF area.

6

2 – In the New Area Definition dialog box:

  1. Enter the OSPF area ID in the Area ID text box.
  2. Select the required OSPF area type from the Type drop-down menu: Normal, or NSSA (which prevents the flooding of AS-external link-state advertisements (LSAs) into the NSSA).
  3. (Optional) Select the required authentication type from the Authentication drop-down menu and enter a password.
  4. Click OK.

7

3 – Under the Area to Interface Mapping section, click the green + icon.

8

4 – In the New Area to Interface Mapping dialog box:

  1. Select an appropriate uplink interface in the Interface drop-down menu.
  2. Select the OSPF area from the Area drop-down menu.
  3. Enter 1 in the Hello Interval text box.
  4. Enter 3 in the Dead Interval text box.
  5. Enter 128 in the Priority text box.
  6. Enter 1 in the Cost text box.
  7. Click OK.

9

5 – Under the OSPF Configuration section, click Edit. In the OSPF Configuration dialog box:

  1. Select the Enable OSPF check box.
  2. Enter the OSPF protocol address in the Protocol Address text box.
  3. Enter the OSPF forwarding address in the Forwarding Address text box.
  4. Select the Enable Graceful Restart check box so that packet forwarding remains uninterrupted during a restart of OSPF services.
  5. (Optional) Select the Enable Default Originate check box to allow the NSX Edge to advertise itself as a default gateway to its peers.
  6. Click OK.

1010-1

6 – Click Publish Changes.

11

7 – Select the Route Redistribution section from the options on the left:

  1. Click Edit.
  2. Select OSPF.
  3. Click OK.

12.gif

12-1

8 – Under the Route Redistribution table section, click the green + icon. In the New Redistribution Criteria dialog box:

  1. Select the IP prefixes from the Prefix Name drop-down menu.
  2. Select OSPF from the Learner Protocol drop-down menu.
  3. Select the appropriate Allow Learning From options.
  4. Select Permit from the Action drop-down menu.
  5. Click OK.

13

9 – Click Publish Changes.

14

Our lab topology will be as below. In my other posts I created the logical switches; add one interface to this DLR as per the topology below. VMs connected to both logical switches will now be able to talk to each other, because the logical router’s connected routes (172.16.10.0/24 and 172.16.20.0/24) are advertised into OSPF.

15

 

VMFS 6

vSphere 6.5 has been released with a new file system, the first since VMFS5 was released in 2011. The good news is that VMFS6 supports 512 devices and 2048 paths; in previous versions the limit was 256 devices and 1024 paths, and some customers were hitting this limit in their clusters, especially when RDMs are used, when people have a limited number of VMs per datastore, or when 8 paths to each device are used. Hopefully with 6.5 that will not happen anytime soon.

Automatic Space Reclamation is the next feature that many customers/administrators have been waiting for. Basically this is based on VAAI UNMAP, which has been around for a while and allows you to unmap previously used blocks: using UNMAP, storage capacity is reclaimed and released to the array so that other volumes can use those blocks when needed. Previously we needed to run the unmap command manually to reclaim the blocks; now this has been integrated into the UI and can be turned on or off from the GUI.

SESparse will be the snapshot format supported on VMFS6; vSphere will not support the VMFSsparse snapshot format on VMFS6, though it will continue to be supported on VMFS5. VMFS6 and VMFS5 can co-exist, but there is no straightforward upgrade from VMFS5 to VMFS6. After you upgrade your ESXi hosts to version 6.5, you can continue using any existing VMFS5 datastores. To take advantage of VMFS6 features, create a VMFS6 datastore and migrate virtual machines from the VMFS5 datastore to the VMFS6 datastore; you cannot upgrade a VMFS5 datastore to VMFS6. Right now SESparse is used primarily for View and for virtual disks larger than 2 TB; with VMFS6 it becomes the default snapshot format.
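If you have PowerCLI at hand, that migration can also be scripted. Below is a minimal sketch, assuming you are already connected with Connect-VIServer and that the datastore names are placeholders for your own VMFS5 and VMFS6 datastores:

# Placeholders: adjust the datastore names to your environment
$source = Get-Datastore -Name "DS-VMFS5-01"
$target = Get-Datastore -Name "DS-VMFS6-01"

# Storage vMotion every VM currently on the VMFS5 datastore to the new VMFS6 datastore
Get-VM -Datastore $source | ForEach-Object {
    Move-VM -VM $_ -Datastore $target -Confirm:$false
}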

Comparison is as below:

Features and Functionalities | VMFS5 | VMFS6
Access for ESXi 6.5 hosts | Yes | Yes
Access for ESXi hosts version 6.0 and earlier | Yes | No
Datastores per host | 1024 | 1024
512n storage devices | Yes (default) | Yes
512e storage devices | Yes (not supported on local 512e devices) | Yes (default)
Automatic space reclamation | No | Yes
Manual space reclamation through the esxcli command | Yes | Yes
Space reclamation from guest OS | Limited | Yes
GPT storage device partitioning | Yes | Yes
MBR storage device partitioning | Yes (for a VMFS5 datastore that was previously upgraded from VMFS3) | No
Storage devices greater than 2 TB for each VMFS extent | Yes | Yes
Support for virtual machines with large capacity virtual disks, or disks greater than 2 TB | Yes | Yes
Support of small files of 1 KB | Yes | Yes
Default use of ATS-only locking mechanisms on storage devices that support ATS | Yes | Yes
Block size | Standard 1 MB | Standard 1 MB
Default snapshots | VMFSsparse for virtual disks smaller than 2 TB; SEsparse for virtual disks larger than 2 TB | SEsparse
Virtual disk emulation type | 512n | 512n
vMotion | Yes | Yes
Storage vMotion across different datastore types | Yes | Yes
High Availability and Fault Tolerance | Yes | Yes
DRS and Storage DRS | Yes | Yes
RDM | Yes | Yes

Happy Learning 🙂

Creating IP Sets with NSX API Calls using PowerShell

A few days back, one of my close friends was working with a customer whose NSX environment is a cross-vCenter NSX deployment. Because of this, they have to create IP Sets, as IP Sets are universal objects that get synchronized with the secondary NSX instance. Since the list of IP Sets was large, we looked at automating it with the NSX API, but we did not want to make many API calls manually, so we decided to do it the PowerShell way using Invoke-RestMethod, which was introduced in PowerShell 3.0.

Invoke-RestMethod

The Invoke-RestMethod cmdlet sends HTTP and HTTPS requests to Representational State Transfer (REST) web services that return richly structured data. Windows PowerShell formats the response based on the data type. For an RSS or ATOM feed, Windows PowerShell returns the Item or Entry XML nodes. For JavaScript Object Notation (JSON) or XML, Windows PowerShell converts (or deserializes) the content into objects.
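As a quick illustration of that deserialization (using a generic public JSON API here, not NSX), a single call returns objects whose properties you can use directly:

# The GitHub releases API returns JSON; Invoke-RestMethod converts it into objects
$release = Invoke-RestMethod -Uri "https://api.github.com/repos/PowerShell/PowerShell/releases/latest"
$release.tag_name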

CSV File Format used in Script:

1.GIF

Script to Create IP Sets from CSV is as below:

2
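Since the script itself is shown as a screenshot, here is a minimal sketch of how such a script might look. The NSX Manager address, credentials, CSV column names (Name, Value) and the universal IP Set endpoint are assumptions for illustration only; adjust them to your environment and verify against the API guide for your NSX version:

# Minimal sketch - placeholders throughout
$nsxManager = "nsxmgr.lab.local"
$cred = Get-Credential
$pair = "{0}:{1}" -f $cred.UserName, $cred.GetNetworkCredential().Password
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair)) }

# Lab only: ignore the NSX Manager self-signed certificate
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

# The universalroot-0 scope makes the IP Set a universal object (synchronized to secondary NSX Managers)
$uri = "https://$nsxManager/api/2.0/services/ipset/universalroot-0"

Import-Csv .\ipsets.csv | ForEach-Object {
    $body = "<ipset><name>$($_.Name)</name><value>$($_.Value)</value></ipset>"
    Invoke-RestMethod -Uri $uri -Method Post -Headers $headers -Body $body -ContentType "application/xml"
}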

Result:

3

4

Finally, using Invoke-RestMethod or Invoke-WebRequest, we can automate many routine NSX tasks with PowerShell. I hope this helps in your daily administrative tasks 🙂.

 

Learn NSX – Part-12 (Create NSX Edge Services Gateway)

Each NSX Edge virtual appliance can have a total of 10 uplink and internal network interfaces. Overlapping IP addresses are not allowed for internal interfaces, and overlapping subnets are not allowed for internal and uplink interfaces.

1- Log in to the vSphere Web Client and click Networking & Security.

2- Select NSX Edges under the Networking & Security section.

3- Click the green + icon to add a new NSX Edge.

1

4- In the Name and description dialog box:

  1. Select Edge Services Gateway as the Install Type.
  2. Enter a name for the NSX Edge services gateway in the Name text box. The name should be unique across all NSX Edge services gateways within a tenant.
  3. Enter a host name for the NSX Edge services gateway in the Hostname text box.
  4. Enter a description in the Description text box.
  5. Enter tenant details in the Tenant text box.
  6. Confirm that Deploy NSX Edge is selected (default).
  7. Select Enable High Availability to enable and configure high availability.
  8. Click Next.

2

5 – In the Settings dialog box:

  1. Leave the default user name of admin in the User Name text box.
  2. Enter a password in the Password and Confirm Password text boxes. The password must be 12 to 255 characters and must contain the following:
    1. At least one upper case letter
    2. At least one lower case letter
    3. At least one number
    4. At least one special character
  3. Select the Enable SSH access check box.
  4. Select the Enable auto rule generation check box.
  5. Select EMERGENCY from the Edge Control Level Logging drop-down menu.
  6. Click Next.

3

6 – In the Configure deployment dialog box:

  1. Select the data center from the Datacenter drop-down menu.
  2. Select the appropriate Appliance Size.
  3. Click the green + icon in the NSX Edge Appliances section.
  4. Select the cluster or resource pool from the Cluster/Resource Pool drop-down menu.
  5. Select the datastore from the Datastore drop-down menu.
  6. (Optional) Select the host from the Host drop-down menu.
  7. (Optional) Select the folder from the Folder drop-down menu.
  8. Click OK and click Next.

4-1

4-2

7 – In the Configure Interfaces dialog box:

  1. Under the Configure interfaces of this NSX Edge section, click the green + icon to create an interface.
    1. NOTE – You must add at least one internal interface for HA to work.
  2. Enter the NSX Edge interface name in the Name text box.
  3. Select Internal or Uplink as the Type.
  4. Click Change next to the Connected To selection box to choose the appropriate logical switch, standard port group or distributed port group with which to connect the interface.
  5. Select Connected for the Connectivity Status.
  6. Assign a primary IP address and subnet prefix length.
  7. Select the appropriate options.
  8. Select Enable Proxy ARP for overlapping network forwarding between different interfaces. Select Send ICMP Redirect to convey routing information to hosts.
  9. Click OK and click Next.

5-15-2.gif

8 – In the Default gateway settings dialog box, deselect the Configure Default Gateway check box and click Next.

6

9 – In the Firewall and HA dialog box:

  1. Select the Configure Firewall default policy check box.
  2. Select Accept for Default Traffic Policy.
  3. Select Disable for Logging.
  4. (Optional) If high availability is enabled, complete the Configure HA parameters section. By default, HA automatically chooses an internal interface and automatically assigns link-local IP addresses.
  5. Click Next.

7

NOTE – If ANY is selected for the high availability interface but there are no internal interfaces configured, the user interface will not display an error. Two NSX Edge appliances will be created, but because there is no internal interface configured, the new NSX Edge appliances remain in standby and high availability is disabled. After an internal interface is configured, high availability will be enabled on the NSX Edge appliances.

10 – In the Ready to complete dialog box, review the configuration and click Finish.

8

Happy Learning 🙂

Learn NSX – Part-11 (Create Distributed Logical Router)

In the previous post we discussed creating logical switches, and workloads now have L2 adjacency across IP subnets with the help of VXLAN. In this post, we are going to enable routing between multiple logical switches.

Topology:

12

An NSX Edge logical router provides routing and bridging functionality. With distributed routing, virtual machines that reside on the same host on different subnets can communicate with one another without having to traverse a traditional routing interface.

Prerequisites before deploying DLR

  • At least one deployed NSX Controller node
  • At least one logical switch

NOTE – A DLR router instance cannot be connected to logical switches that exist in different transport zones.

1 – Log in to the vSphere Web Client and click Networking & Security.

2 – Select NSX Edges under the Networking & Security section.

1

Click the green + icon to add a new NSX Edge.

2.gif

3 – In the Name and description dialog box:

  1. Select Logical (Distributed) Router as the Install Type.
  2. Enter the name in the Name text box. This name appears in your vCenter inventory and should be unique across all logical routers within a single tenant.
  3. Enter a host name for the distributed logical router in the Hostname text box.
  4. Enter a description in the Description text box.
  5. Enter tenant details in the Tenant text box.
  6. Deploy Edge Appliance is selected by default. An NSX Edge appliance is required for dynamic routing (it is only needed if you want dynamic routing and firewalling). NOTE – An NSX Edge appliance cannot be added to the logical router after the logical router has been created.
  7. Select Enable High Availability to enable and configure high availability. High availability is required for dynamic routing.
  8. Click Next.

3

4 – In the Settings dialog box:

  1. Leave the default user name of admin in the User Name text box.
  2. Enter a password in the Password and Confirm Password text boxes.
  3. Select the Enable SSH access check box.
  4. Select EMERGENCY from the Edge Control Level Logging drop-down menu.
  5. Click Next.

4

5 – In the Configure deployment dialog box:

  1. Select the data center from the Datacenter drop down list.
  2. Click the green + icon in the NSX Edge Appliances section.
  3. Select the cluster or resource pool from the Cluster/Resource Pool drop-down menu.
  4. Select the datastore from the Datastore drop-down menu.
  5. Select the host from the Host drop down list (Optional).
  6. Select the folder from the Folder drop-down menu (Optional).
  7. Click OK and click Next.

5

6

6 – In the Configure Interfaces dialog box:

  1. Under the HA Interface Configuration section, click Select next to the Connected To selection box to choose the appropriate logical switch or distributed port group. Generally, this interface should be connected to the management distributed port group.
  2. Under the Configure interfaces of this NSX Edge section, click the green + icon to create a logical interface.
  3. Enter the logical router interface name in the Name text box.
  4. Select the Internal or Uplink radio button as the Type.
  5. Click Change next to the Connected To selection box to choose the appropriate logical switch with which to connect the interface.
  6. Select Connected for Connectivity Status.
  7. Under the Configure subnets section, click the green + icon and assign an IP address and subnet prefix length. Click OK.
  8. Repeat steps b through g for all interfaces to be created.
  9. Click OK and click Next.

7

89

7 – In the Default gateway settings dialog box, deselect the Configure Default Gateway check box and click Next.

10

8 – In the Ready to complete dialog box, review the configuration and click Finish.

11

NOTE – For the HA interface, do not use an IP address that exists somewhere else on your network, even if that network is not directly attached to the NSX Edge.

Happy Learning 🙂

Docker Engine for Windows Server 2016 Now Available

 

Microsoft announced the general availability of Windows Server 2016 and, with it, the Docker Engine running containers natively on Windows. This marks the expansion of the Docker platform beyond Linux workloads to support Windows Server applications. The Commercially Supported Docker Engine (CS Docker Engine) is now available at no additional cost with every edition of Windows Server 2016.

Windows Server 2016 is where Docker Windows containers should be deployed for production. For developers planning to do lots of Docker Windows container development, it may also be worth setting up a Windows Server 2016 dev system (in a VM, for example), at least until Windows 10 and Docker for Windows support for Windows containers matures.

Once Windows Server 2016 is running, log in, run Windows Update to ensure you have all the latest updates, and install the Windows-native Docker Engine directly (that is, not using “Docker for Windows”). Run the following in an administrative PowerShell prompt:

# Add the containers feature and restart
Install-WindowsFeature containers
Restart-Computer -Force

# Download, install and configure Docker Engine
Invoke-WebRequest "https://download.docker.com/components/engine/windows-server/cs-1.12/docker.zip" -OutFile "$env:TEMP\docker.zip" -UseBasicParsing

Expand-Archive -Path "$env:TEMP\docker.zip" -DestinationPath $env:ProgramFiles

# For quick use in the current session; does not require the shell to be restarted.
$env:path += ";c:\program files\docker"

# For persistent use; will apply even after a reboot.
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Docker", [EnvironmentVariableTarget]::Machine)

# Start a new PowerShell prompt before proceeding
dockerd --register-service
Start-Service docker
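Once the service is running, a quick sanity check from the same PowerShell prompt might look like the following; the base image name reflects what was published on Docker Hub at the time and may differ today:

# Confirm client and engine versions
docker version
# Pull the Windows Server Core base image and start an interactive container
docker pull microsoft/windowsservercore
docker run -it microsoft/windowsservercore cmd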


For More Details Click Here

Guys, let’s learn it before it becomes mainstream in the industry 🙂

 

NFV, SDN and VMware NSX

NFV

After compute virtualization, Network Functions Virtualization (NFV) is the next step: taking functions out of physical networking devices and running them in VMs. NFV separates network functions from routers, firewalls, load balancers and other dedicated hardware devices and allows network services to be hosted on virtual machines. Virtual machines run on a hypervisor, also called a virtual machine manager, which allows multiple operating systems to share a single hardware processor. When the hypervisor hosts network functions, those services that previously required dedicated hardware can be performed on standard x86 servers.

SDN

SDN makes the network programmable by separating the control plane (telling the network what goes where) from the data plane (sending packets to specific destinations). It relies on switches that can be programmed through an SDN controller using an industry-standard control protocol, such as OpenFlow.

SDN | NFV
SDN separates the control plane and data plane and centralizes control and programmability of the network. | NFV transfers network functions from dedicated appliances to generic servers.
Network intelligence is logically centralized in SDN controller software that maintains a global view of the network, which appears to applications and policy engines as a single, logical switch. | As virtual devices, network functions require no extra space, power or cooling.
SDN operates in a campus, data center and/or cloud environment. | Physical devices often rapidly reach EOL or run out of capacity and need to be upgraded to larger models.
SDN software targets cloud orchestration and networking. | NFV targets the service provider network; NFV software targets routers, firewalls, gateways, WAN, CDN, accelerators and SLA assurance.

SDN and NFV Are Better Together

These approaches are mutually beneficial, but they are not dependent on one another; you do not need one to have the other. However, the reality is that SDN makes NFV and NV more compelling, and vice versa. SDN contributes network automation that enables policy-based decisions to orchestrate which network traffic goes where, while NFV focuses on the services, and NV ensures the network’s capabilities align with the virtualized environments they are supporting.

VMware NSX

VMware NSX is a hypervisor networking solution designed to manage, automate, and provide basic Layer 4-7 services to virtual machine traffic. NSX is capable of providing switching, routing, and basic load-balancer and firewall services to data moving between virtual machines from within the hypervisor. For non-virtual machine traffic (handled by more than 70% of data center servers), NSX requires traffic to be sent into the virtual environment. While NSX is often classified as an SDN solution, as per my understanding that is really not the case.

SDN is defined as providing the ability to manage the forwarding of frames/packets and apply policy; to perform this at scale in a dynamic fashion; and to be programmed. This means that an SDN solution must be able to forward frames. Because NSX has no hardware switching components, it is not capable of moving frames or packets between hosts, or between virtual machines and other physical resources. In my view, this places VMware NSX into the Network Functions Virtualization (NFV) category. NSX virtualizes switching and routing functions, with basic load-balancer and firewall functions.

UseCases:

Security:

NSX can be used to create a secure infrastructure, which can create a zero-trust security model. Every virtualized workload can be protected with a full stateful firewall engine at a very granular level. Security can be based on constructs such as MAC, IP, ports, vCenter objects and tags, active directory groups, etc. Intelligent dynamic security grouping can drive the security posture within the infrastructure.

Automation:

VMware NSX provides a full RESTful API to consume networking, security and services, which can be used to drive automation within the infrastructure. IT admins can reduce the tasks and cycles required to provision workloads within the data center using NSX.
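As a small illustration of consuming that API from PowerShell (a hedged sketch; the NSX Manager address is a placeholder, and the appliance-management endpoint shown should be verified against the API guide for your NSX version):

# Minimal sketch - returns NSX Manager summary/version information
$cred = Get-Credential
Invoke-RestMethod -Uri "https://nsxmgr.lab.local/api/1.0/appliance-management/global/info" -Credential $cred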

NSX is integrated out of the box with automation tools such as vRealize automation, which can provide customers with a one-click deployment option for an entire application, which includes the compute, storage, network, security and L4-L7 services.

Developers can use NSX with the OpenStack platform. NSX provides a neutron plugin that can be used to deploy applications and topologies via OpenStack.

Application Continuity:

NSX provides a way to easily extend networking and security across up to eight vCenters, either within or across data centers. In conjunction with vSphere 6.0, customers can easily vMotion a virtual machine across long distances, and NSX will ensure that the network and firewall rules remain consistent across the sites. This essentially maintains the same view across sites.

NSX Cross-vCenter Networking can help build active-active data centers. Customers are using NSX today with VMware Site Recovery Manager to provide disaster recovery solutions. NSX can extend the network across data centers and even to the cloud to enable seamless networking and security.

Features:

Switching:

Logical switching enables extension of a L2 segment / IP subnet anywhere in the fabric independent of the physical network design.

Routing:

Routing between IP subnets can be done in the logical space without traffic leaving the hypervisor; routing is performed directly in the hypervisor kernel with minimal CPU/memory overhead. This distributed logical routing (DLR) provides an optimal data path for traffic within the virtual infrastructure (that is, east-west communication). Additionally, the NSX Edge provides an ideal centralized point for seamless integration with the physical network infrastructure to handle communication with the external network (i.e., north-south communication) with ECMP-based routing.

Connectivity to physical networks:

L2 and L3 gateway functions are supported within NSX to provide communication between workloads deployed in logical and physical spaces.

Edge Firewall:

Edge firewall services are part of the NSX Edge Services Gateway (ESG). The Edge firewall provides essential perimeter firewall protection which can be used in addition to a physical perimeter firewall. The ESG-based firewall is useful in developing PCI zones, multi-tenant environments, or dev-ops style connectivity without forcing the inter-tenant or inter-zone traffic onto the physical network.

VPN:

NSX provides L2 VPN, IPsec VPN, and SSL VPN services to enable L2 and L3 VPN connectivity. These VPN services address the critical use cases of interconnecting remote data centers and providing remote user access.

Logical Load-balancing:

NSX provides L4-L7 load balancing with support for SSL termination. The load balancer comes in two different form factors, supporting inline as well as proxy mode configurations. The load balancer addresses critical use cases in virtualized environments, enabling dev-ops style functionality and supporting a variety of workloads in a topology-independent manner.

DHCP & NAT Services:

NSX supports DHCP servers and DHCP forwarding mechanisms, as well as NAT services. NSX also provides an extensible platform that can be used for deployment and configuration of third-party vendor services. Examples include virtual form factor load balancers (e.g., F5 BIG-IP LTM) and network monitoring appliances (e.g., Gigamon GigaVUE-VM). Integration with existing physical appliances, such as physical load balancers and IPAM/DHCP server solutions, is also simple.

Distributed Firewall:

Security enforcement is done directly at the kernel and vNIC level. This enables highly scalable firewall rule enforcement by avoiding bottlenecks on physical appliances. The firewall is distributed in kernel, minimizing CPU overhead while enabling line-rate performance.

NSX also provides an extensible framework, allowing security vendors to provide an umbrella of security services. Popular offerings include anti-virus/anti-malware/anti-bot solutions, L7 firewalling, IPS/IDS (host and network based) services, file integrity monitoring, and vulnerability management of guest VMs.

I hope this helps you understand what NSX does and its use cases. Comments are welcome to make this post more accurate and interesting… 🙂

Learn NSX – Part-10 (Create Logical Switches)

A logical switch reproduces Layer 2 and Layer 3 functionality (unicast, multicast, or broadcast) in a virtual environment completely decoupled from underlying hardware.

  • Select Logical Switches under the Networking & Security section.
  • Click the green + icon to add a new logical switch.
  • 1.gif
  • In the New Logical Switch dialog box:
  • 2
    • Enter the logical switch name in the Name text box.
    • Enter a description in the Description text box. For ease of use, add the subnet that will be used within this logical switch.
    • In the Transport Zone selection box, click Change to choose the appropriate transport zone.
    • Leave the Replication Mode option as is, or select a new one if you want to override the transport zone replication mode. By default, the logical switch inherits the control plane mode from the transport zone.
    • Select the Enable IP Discovery check box to enable ARP suppression. This minimizes ARP flooding within individual VXLAN segments.
    • Select the Enable MAC Learning check box to avoid possible traffic loss during VMware vSphere vMotion.
    • Click OK.

Happy Learning 🙂

In my previous post, I demonstrated how we can create a logical switch using the API; for details, please check this post.
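As a quick reminder of that API approach, here is a minimal sketch. The NSX Manager address, transport zone ID (vdnscope-1) and switch details are placeholders, and the virtualwires endpoint should be verified against the API guide for your NSX version:

# Minimal sketch - creates a logical switch in the given transport zone
$nsxManager = "nsxmgr.lab.local"
$cred = Get-Credential
$body = "<virtualWireCreateSpec><name>LS-Web-Tier</name><description>172.16.10.0/24</description><tenantId>Tenant-A</tenantId></virtualWireCreateSpec>"
Invoke-RestMethod -Uri "https://$nsxManager/api/2.0/vdn/scopes/vdnscope-1/virtualwires" -Method Post -Credential $cred -Body $body -ContentType "application/xml"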

VMware Validated Design (SDDC) Available for download

The VMware Validated Design provides a set of prescriptive documents that explain how to plan, deploy, and configure a deployment of a Software-Defined Data Center (SDDC). This design supports a number of use cases, and is designed to minimize difficulty in integration, expansion, and operation, as well as future updates and upgrades.

In my opinion, this is really going to help architects and administrators align their designs and deployments with VMware recommendations, helping customers build more robust, highly available and supported environments.

Download 

Learn NSX – Part-09 (Transport Zone)

The Segment ID Pool specifies a range of VXLAN Networks Identifiers (VNIs) to use when building logical network segments. This determines the maximum number of logical switches that can be created in your infrastructure.

  • You must specify a segment ID pool for each NSX Manager to isolate your network traffic.

A transport zone defines the span of a logical switch by delineating the width of the VXLAN/VTEP replication scope and control plane. It can span one or more vSphere clusters. You can have one or more transport zones based on your requirements.

  • Log in to the vSphere Web Client and click Networking & Security.
  • Select Installation under the Networking & Security section and select the Logical Network Preparation tab.
  • Select the Segment ID menu and click Edit.
    • 1
  • Enter the range of numbers to be used for VNIs in the Segment ID pool text box and click OK.
    • 2
  • Select the Transport Zones menu and click the green + icon to add a new transport zone.
    • 3
  • In the New Transport Zone dialog box:
    • Enter the name of the transport zone in the Name text box.
    • (Optional) Add a description in the Description text box.
    • Depending on whether you have NSX Controller nodes in your environment or want to use multicast addresses, select the appropriate Replication mode (also known as the control plane mode).
    • Select the check boxes for each cluster to be added to the transport zone.
    • Click OK.
    • 4

We have now completed the prerequisites for virtual network deployment. In the next few posts, I will help you deploy logical switching, routing, and more.
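If you prefer to double-check this configuration programmatically, a minimal sketch against the NSX API might look like the following (the NSX Manager address is a placeholder, and the endpoints shown should be verified against the API guide for your NSX version):

# Minimal sketch - list segment ID pools and transport zones
$nsxManager = "nsxmgr.lab.local"
$cred = Get-Credential
Invoke-RestMethod -Uri "https://$nsxManager/api/2.0/vdn/config/segments" -Credential $cred
Invoke-RestMethod -Uri "https://$nsxManager/api/2.0/vdn/scopes" -Credential $cred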

Happy Learning 🙂

Containers in Windows Server 2016 TP5

I am just starting to get my head around the concept of containers and decided to take Windows Server 2016 Technical Preview 5 for a test, which includes Docker containers as a feature. It is worth being explicit here (for anyone not aware) that this isn’t Microsoft’s own version of containers; these are Docker containers.

There are two types of containers supported by MS.

Windows Containers

  • Multiple container instances can run concurrently on a host, with isolation provided through namespace, resource control, and process isolation technologies. Windows Server containers share the same kernel with the host, as well as each other.

Hyper-V Containers

  • Multiple container instances can run concurrently on a host; however, each container runs inside of a special virtual machine. This provides kernel level isolation between each Hyper-V container and the container host.

Lets Deploy our first windows container host:

First Install Container Feature

The container feature needs to be enabled before working with Windows containers. To do so run the following command in an elevated PowerShell session.

Install-WindowsFeature containers

When the feature installation has completed, reboot the computer.

Restart-Computer -Force

Next Install Docker

Docker is required in order to work with Windows containers. Docker consists of the Docker Engine, and the Docker client. For this exercise, both will be installed.

New-Item -Type Directory -Path 'C:\Program Files\docker\'

Invoke-WebRequest https://aka.ms/tp5/b/dockerd -OutFile $env:ProgramFiles\docker\dockerd.exe

Invoke-WebRequest https://aka.ms/tp5/b/docker -OutFile $env:ProgramFiles\docker\docker.exe

[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Docker", [EnvironmentVariableTarget]::Machine)

The commands above create a new Docker folder in Program Files and download the Docker daemon and client binaries into this directory. The last command adds the directory to the Path environment variable.

To install Docker as a Windows service, run the following.

dockerd --register-service

Once installed, the service can be started.

Start-Service Docker

Install Base Container Images

Before working with Windows Containers, a base image needs to be installed. Base images are available with either Windows Server Core or Nano Server as the underlying operating system.

To install the Windows Server Core base image run the following:

Install-PackageProvider ContainerImage -Force
Install-ContainerImage -Name WindowsServerCore

 

Restart the Docker service:

Restart-Service docker

1
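At this point you can check that the base image is registered and start a first container. The image name and tag will be whatever docker images lists on your system, so treat the name below as a placeholder:

# List the installed base images and note the exact name/tag
docker images
# Start an interactive container from the Windows Server Core base image (adjust the name/tag to match your listing)
docker run -it windowsservercore cmd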

Upgrade NSX to 6.2.4 – Step by Step

Before we upgrade

It is very important to validate all necessary components before NSX is upgraded. I have already shared the upgrade impact in my previous post; the checklist below will also help you prepare your environment for the upgrade:

  • Download the NSX upgrade bundle and check its MD5 checksum (a quick PowerShell sketch follows this checklist).
  • If you are upgrading from 6.2.2 to 6.2.3/6.2.4, there is a known bug that affects the Edge upgrade when incorrect ciphers are configured. The VMware KB article NSX Edge is unmanageable after upgrading to NSX 6.2.3 explains the steps needed to check and/or change this before the upgrade.
  • Understand the update sequence for vSphere 6.0 and its compatible VMware products.
  • Backup of components:
    • Take a backup of NSX Manager (if you are not doing it regularly) before the upgrade.
    • Before upgrading download technical support logs.
    • Export of vSphere Distributed Switches configuration.
    • Create a backup of vCenter Server database.
    • Take a snapshot of vCenter Server and vCenter Server database.
    • Take a snapshot of NSX Manager (without quiescing VMware Tools, because it is not supported and it might crash your NSX Manager).
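A quick way to check the bundle's MD5 from PowerShell is shown below; the file name is a placeholder, and the resulting hash should be compared with the checksum published on the VMware download page:

Get-FileHash -Path .\VMware-NSX-Manager-upgrade-bundle-6.2.4-XXXXXXX.tar.gz -Algorithm MD5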

Let’s Start…

  1. Log in to NSX Manager and go to the Upgrade section.
  2. Click the Upgrade button, specify the location of the upgrade bundle and click Continue.
  3. 3
  4. It will take some time to upload the bundle.
  5. Choose whether you want to enable SSH and participate in the Customer Experience Improvement Program, then click Upgrade to proceed. 2
  6. The upgrade takes some time, so be patient.
  7. The GUI reports that the upgrade has been completed. 4

NSX Controllers Upgrade

  1. Go back to the vSphere Web Client, navigate to Networking & Security > Installation, and click the Upgrade Available button. 55
  2. Confirm Yes to proceed with the Controllers upgrade.
  3. It will take some time to upgrade the controllers. In my case, it was around 10 minutes per controller.
  4. 6

VMware ESXi Host NSX VIBs Upgrade

  1. Navigate to the Installation section and click Host Preparation. 7
  2. Click Upgrade Available on every cluster where you are using NSX and confirm the upgrade by clicking Yes.
  3. In the vSphere Client, uninstall and install tasks will be visible; the process basically uninstalls the old VIBs and installs the new ones. As you are aware, uninstalling a VIB requires a reboot, so all of your hosts will get rebooted. If you have a DRS-enabled cluster, NSX will do all the magic automatically: it will put a host in maintenance mode, reboot it, and once this completes, move on to the next host in the cluster. Once all hosts are upgraded, you will see something like this… 8

NSX Edge Services Gateway and Distributed Logical Router Upgrade.

  1. The next step is to upgrade the Edge Services Gateways and Distributed Logical Routers.
  2. Navigate to NSX Edges, right-click an edge, and click “Upgrade Version”.
  3. 9
  4. Please remember that there might be a service interruption, depending on the configuration used in your environment.
  5. In the vSphere Web Client current tasks you will see the deployment of two temporary new Edge Services Gateways in progress; after some time all the edges will be upgraded to version 6.2.4. 10
  6. The same procedure is used to upgrade the other edges.

Post-Upgrade Checklist

  • Check if NSX Manager is working.
  • Check if NSX Manager backup is working.
  • Check if NSX VIBs are installed on ESXi host.
  • Resynchronize the host message bus.
  • Remove snapshot from NSX Manager.
  • Remove snapshot from vCenter Server.
  • Remove snapshot from vCenter Server database.

Happy learning 🙂

Operational Impacts of NSX Upgrades

The NSX upgrade process can take some time, especially when upgrading ESXi hosts, because hosts must be rebooted. It is important to understand the operational state of NSX components during an upgrade, such as when some but not all hosts have been upgraded, or when NSX Edges have not yet been upgraded. VMware recommends that you run the upgrade in a single outage window to minimize downtime and reduce confusion among NSX users who cannot access certain NSX management functions during the upgrade. However, if your site requirements prevent you from completing the upgrade in a single outage window, the information below can help your NSX users understand what features are available during the upgrade.

An NSX deployment upgrade proceeds as follows:

NSX Manager —> NSX Controller Cluster —> NSX Host Clusters —> NSX Edges

NSX Manager Upgrade

During:

NSX Manager configuration is blocked. The NSX API service is unavailable. No changes to the NSX configuration can be made. Existing VM communication continues to function. New VM provisioning continues to work in vSphere, but the new VMs cannot be connected to NSX logical switches during the NSX Manager upgrade.

After:

All NSX configuration changes are allowed. At this stage, if any new NSX Controllers are deployed, they will boot with the old version until the existing NSX Controller cluster is upgraded. Changes to the existing NSX configuration are allowed. New logical switches, logical routers, and edge service gateways can be deployed. For the distributed firewall, if new features are introduced after the upgrade, those are unavailable for configuration (greyed out) in the user interface until all hosts are upgraded.

NSX Controller Cluster Upgrade

During:

  • Logical network creation and modifications are blocked during the upgrade process. Do not make logical network configuration changes while the NSX Controller cluster upgrade is in progress.
  • Do not provision new VMs during this process. Also, do not move VMs or allow DRS to move VMs during the upgrade.
  • During the upgrade, when there is a temporary non-majority state, existing virtual machines do not lose networking.
  • New logical network creation is automatically blocked during the upgrade.
  • Do not allow dynamic routes to change during the upgrade.

After:

  • Configuration changes are allowed. New logical networks can be created. Existing logical networks continue to function.

NSX Host Upgrade

During:

  • Configuration changes are not blocked on NSX Manager. Upgrade is performed on a per-cluster basis. If DRS is enabled on the cluster, DRS manages the upgrade order of the hosts. Adds and changes to logical network are allowed. The host currently undergoing upgrade is in maintenance mode. Provisioning of new VMs continues to work on hosts that are not currently in maintenance mode.

When some NSX hosts in a cluster are upgraded and others are not:

  • NSX Manager Configuration changes are not blocked. Controller-to-host communication is backward compatible, meaning that upgraded controllers can communicate with non-upgraded hosts. Additions and changes to logical networks are allowed. Provisioning new VMs continues to work on hosts that are not currently undergoing upgrade. Hosts currently undergoing upgrade are placed in maintenance mode, so VMs must be powered off or evacuated to other hosts. This can be done with DRS or manually.

 

NSX Edge Upgrade

NSX Edges can be upgraded without any dependency on the NSX Controller or host upgrades. You can upgrade an NSX Edge even if you have not yet upgraded the NSX Controller or hosts.

During:

  • On the NSX Edge device currently being upgraded, configuration changes are blocked. Additions and changes to logical switches are allowed. Provisioning new VMs continues to work.
  • Packet forwarding is temporarily interrupted.
  • In NSX Edge 6.0 and later, OSPF adjacencies are withdrawn during upgrade if graceful restart is not enabled.

After:

  • Configuration changes are not blocked. Any new features introduced in the NSX upgrade will not be configurable until all NSX Controllers and all host clusters have been upgraded to NSX version 6.2.x.
  • L2 VPN must be reconfigured after upgrade.
  • SSL VPN clients must be reinstalled after upgrade

Guest Introspection Upgrade

During an NSX upgrade, the NSX UI prompts you to upgrade Guest Introspection service.

 During:

  • There is a loss of protection for VMs in the NSX cluster when there is a change to the VMs, such as VM additions, vMotions, or deletions.

After:

  • VMs are protected during VM additions, vMotions, and deletions.

————————————————————————–

Verify the NSX Working State Before beginning and after the upgrade:

Before beginning the upgrade, it is important to test the NSX working state. Otherwise, you will not be able to determine if any post-upgrade issues were caused by the upgrade process or if they preexisted the upgrade process. Do not assume everything is working before you start to upgrade the NSX infrastructure. Make sure to check it first.

 Procedure

  1. Note the current versions of NSX Manager, vCenter Server, ESXi and NSX Edges.
  2. Identify administrative user IDs and passwords.
  3. Verify that you can log in to the following components:
    • vCenter Server
    • NSX Manager Web UI
    • Edge services gateway appliances
    • Distributed logical router appliances
    • NSX Controller appliances
  4. Verify that VXLAN segments are functional. Make sure to set the packet size correctly and include the don't-fragment bit.
    • Ping between two VMs that are on the same logical switch but on two different hosts.
      • From a Windows VM: ping -l 1472 -f <destVM>
      • From a Linux VM: ping -s 1472 -M do <destVM>
    • Ping between two hosts' VTEP interfaces: ping ++netstack=vxlan -d -s 1572 <dest VTEP IP>. Note: to get a host's VTEP IP, look up the vmknicPG IP address on the host's Manage > Networking > Virtual Switches page.
  5. Validate North-South connectivity by pinging out from a VM.
  6. Visually inspect the NSX environment to make sure all status indicators are green/normal/deployed:
    • Check Installation > Management.
    • Check Installation > Host Preparation.
    • Check Installation > Logical Network Preparation > VXLAN Transport.
    • Check Logical Switches.
    • Check NSX Edges.
  7. Record BGP and OSPF states on the NSX Edge devices:
    • show ip ospf neighbor
    • show ip bgp neighbor
    • show ip route
  8. If possible, in the pre-upgrade environment, create some new components and test their functionality:
    • Create a new logical switch.
    • Create a new edge services gateway and a new distributed logical router.
    • Connect a VM to the new logical switch and test the functionality.
  9. Validate netcpad and vsfwd user-world agent (UWA) connections:
    • On an ESXi host, run esxcli network vswitch dvs vmware vxlan network list --vds-name=<vds-name> and check the controller connection state.
    • On NSX Manager, run the show tech-support save session command and search for "5671" to ensure that all hosts are connected to NSX Manager.
  10. (Optional) If you have a test environment, test the upgrade and post-upgrade functionality before upgrading a production environment.