Monthly Archives: November 2016

Custom TCP/IP Stacks

In vSphere 5.1 and earlier, there was a single TCP/IP stack used by all the different types of network traffic. This meant that management, virtual machine (VM) traffic, vMotion, NFC, and so on were all fixed to the same stack. Because of the shared stack, all VMkernel interfaces had some things in common: the same default gateway, the same memory heap, and the same ARP and routing tables.

From vSphere 5.5 onward, VMware changed this functionality, allowing for multiple TCP/IP stacks, but with some limitations. Only certain types of traffic could make use of a stack other than the default one, and a custom TCP/IP stack had to be created from the command line using an ESXCLI command.

In vSphere 6, VMware went ahead and created separate vMotion and Provisioning stacks by default, and when you deploy NSX, it creates its own stack as well.

NOTE – Custom TCP/IP stacks aren’t supported for use with fault tolerance logging, management traffic, Virtual SAN traffic, vSphere Replication traffic, or vSphere Replication NFC traffic.

Create a Custom TCP/IP Stack:

To create a new TCP/IP stack, we must use ESXCLI:

esxcli network ip netstack add -N="stack name"

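Once created, the stack can be verified and a VMkernel interface attached to it. A minimal sketch; vmk3, the port group name, and "stack name" are placeholders:

# Confirm the new stack exists
esxcli network ip netstack list

# Create a VMkernel interface bound to the custom stack
esxcli network ip interface add -i vmk3 -p "PG-Custom" -N "stack name"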

Modify Custom TCP/IP Stack Settings:

Now that you have created your custom stack, it needs to be configured with settings such as which DNS servers to use and which address to use as the default gateway. In the advanced settings you can also configure which congestion control algorithm to use and the maximum number of connections that can be active at any particular time.

To configure these settings, go to Manage > Networking > TCP/IP configuration and select the stack to be configured.


On the edit page, these settings can be modified:

  • Name. Change the name if required.
  • DNS Configuration. Use DHCP so the custom TCP/IP stack can pick up its settings automatically, or specify static DNS servers with search domains.
  • Default Gateway. Having a separate gateway is one of the primary reasons for creating a TCP/IP stack apart from the default one, and it can be configured here (see the ESXCLI sketch after this list).
  • Congestion Control Algorithm. The algorithm specified here affects vSphere’s response when congestion is suspected on the network stack.
  • Max Number of Connections. The default is set to 11,000.

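These settings can also be applied from the command line. A minimal sketch, with placeholder addresses and the same example stack name as above:

# Add a DNS server to the custom stack
esxcli network ip dns server add -N "stack name" -s 192.168.100.10

# Set a default gateway for the custom stack only
esxcli network ip route ipv4 add -N "stack name" -n default -g 192.168.100.1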

Configuring custom TCP/IP stacks also lets you have a separate gateway for each kind of traffic, if that is the way the network has been designed.


Learn NSX – Part-13 (Configure OSPF for DLR)

Configuring OSPF on a logical router enables VM connectivity across logical routers and from logical routers to edge services gateways (ESGs).

OSPF routing policies provide a dynamic process of traffic load balancing between routes of equal cost. An OSPF network is divided into routing areas to optimize traffic flow and limit the size of routing tables. An area is a logical collection of OSPF networks, routers, and links that have the same area identification; areas are identified by an Area ID.

Before we proceed with the OSPF configuration, a Router ID must first be configured on our deployed DLR. To configure the Router ID:

1- Log in to the vSphere Web Client and click Networking & Security.

2- Select NSX Edges under the Networking & Security section.

3 – Double-click the distributed logical router on which to configure OSPF.


4 – On the Manage tab:

  1. Select the Routing tab.
  2. Select Global Configuration section from the options on the left.
  3. Under the Dynamic Routing Configuration section, click Edit.


5 – In the Edit Dynamic Routing Configuration dialog box:

  1. Select an interface from the Router ID drop-down menu to use as the OSPF Router ID.
  2. Select the Enable Logging check box.
  3. Select Info from the Log Level drop-down menu.
  4. Click OK.


6 – Click Publish Changes.


Now that the Router ID is configured, let’s configure OSPF:

1 – Select the OSPF section from the options on the left:

  1. Under the Area Definitions section, click the green + icon to add an OSPF area.


2 – In the New Area Definition dialog box:

  1. Enter the OSPF area ID in the Area ID text box.
  2. Select the required OSPF area type from the Type drop-down menu:
    1. Normal
    2. NSSA, which prevents the flooding of AS-external link-state advertisements (LSAs) into NSSAs.
  3. (Optional) Select the required authentication type from the Authentication drop-down menu and enter a password.
  4. Click OK.


3 – Under the Area to Interface Mapping section, click the green + icon.


4 – In the New Area to Interface Mapping dialog box:

  1. Select an appropriate uplink interface in the Interface drop-down menu.
  2. Select the OSPF area from the Area drop-down menu.
  3. Enter 1 in the Hello Interval text box.
  4. Enter 3 in the Dead Interval text box.
  5. Enter 128 in the Priority text box.
  6. Enter 1 in the Cost text box.
  7. Click OK.


5 – Under the OSPF Configuration section, click Edit. In the OSPF Configuration dialog box:

  1. Select the Enable OSPF check box.
  2. Enter the OSPF protocol address in the Protocol Address text box.
  3. Enter the OSPF forwarding address in the Forwarding Address text box.
  4. Select the Enable Graceful Restart check box for packet forwarding to be uninterrupted during restart of OSPF services.
  5. (Optional) Select the Enable Default Originate check box to allow the NSX Edge to advertise itself as a default gateway to its peers.
  6. Click OK.


6 – Click Publish Changes.


7 – Select the Route Redistribution section from the options on the left:

  1. Click Edit.
  2. Select OSPF.
  3. Click OK.


8 – Under the Route Redistribution table section, click the green + icon. In the New Redistribution criteria dialog box:

  1. Select the IP prefixes from the Prefix Name drop-down menu.
  2. Select OSPF from the Learner Protocol drop-down menu.
  3. Select the appropriate protocols under Allow Learning From.
  4. Select Permit from the Action drop-down menu.
  5. Click OK.


9 – Click Publish Changes.


Our lab topology uses the logical switches created in my earlier posts; add one interface to this DLR for each logical switch as per that topology. VMs connected to the two logical switches will then be able to talk to each other, because the logical router’s connected routes (172.16.10.0/24 and 172.16.20.0/24) are advertised into OSPF.
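
Once the changes are published, the OSPF adjacency and learned routes can be verified from the DLR console. A quick check, assuming SSH access to the DLR (dlr-01 is a placeholder hostname; commands as in the NSX Edge CLI):

dlr-01> show ip ospf neighbor
dlr-01> show ip route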


VMFS 6

vSphere 6.5 has been released with a new file system, the first since VMFS 5 was released in 2011. The good news is that VMFS 6 supports 512 devices and 2048 paths; in previous versions the limit was 256 devices and 1024 paths, and some customers were hitting this limit in their clusters. Especially when RDMs are used, when people keep a limited number of VMs per datastore, or when 8 paths to each device are used, it becomes easy to hit those limits. Hopefully with 6.5 that will not happen anytime soon.

Automatic space reclamation is the next feature that many customers and administrators have been waiting for. It is based on VAAI UNMAP, which has been around for a while and allows you to unmap previously used blocks: using UNMAP, storage capacity is reclaimed and released to the array so that other volumes can use those blocks when needed. Previously we needed to run the unmap command manually to reclaim blocks; now this has been integrated into the UI and can be turned on or off from the GUI.
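
On VMFS5 datastores (or to trigger reclamation by hand), the manual ESXCLI method still applies; the datastore name below is a placeholder:

# Reclaim unused blocks on a datastore (VAAI UNMAP)
esxcli storage vmfs unmap -l "Datastore01"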

SEsparse will be the only snapshot format supported on VMFS 6; vSphere will not support the VMFSsparse snapshot format on VMFS 6, though it continues to be supported on VMFS 5. VMFS 6 and VMFS 5 can co-exist, but there is no straightforward upgrade from VMFS 5 to VMFS 6. After you upgrade your ESXi hosts to version 6.5, you can continue using any existing VMFS5 datastores; to take advantage of VMFS6 features, create a VMFS6 datastore and migrate virtual machines from the VMFS5 datastore to it. Right now SEsparse is used primarily for View and for virtual disks larger than 2 TB; with VMFS 6, SEsparse becomes the default.
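
The migration itself is easy to script. A minimal PowerCLI sketch, assuming an active Connect-VIServer session and placeholder datastore names:

# Storage vMotion every VM from the old VMFS5 datastore to the new VMFS6 one
Get-VM -Datastore "VMFS5-DS01" | Move-VM -Datastore (Get-Datastore "VMFS6-DS01")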

Comparison is as below:

Features and Functionalities | VMFS5 | VMFS6
Access for ESXi 6.5 hosts | Yes | Yes
Access for ESXi hosts version 6.0 and earlier | Yes | No
Datastores per host | 1024 | 1024
512n storage devices | Yes (default) | Yes
512e storage devices | Yes (not supported on local 512e devices) | Yes (default)
Automatic space reclamation | No | Yes
Manual space reclamation through the esxcli command | Yes | Yes
Space reclamation from guest OS | Limited | Yes
GPT storage device partitioning | Yes | Yes
MBR storage device partitioning | Yes (for a VMFS5 datastore previously upgraded from VMFS3) | No
Storage devices greater than 2 TB for each VMFS extent | Yes | Yes
Support for virtual machines with large capacity virtual disks, or disks greater than 2 TB | Yes | Yes
Support of small files of 1 KB | Yes | Yes
Default use of ATS-only locking mechanisms on storage devices that support ATS | Yes | Yes
Block size | Standard 1 MB | Standard 1 MB
Default snapshots | VMFSsparse for virtual disks smaller than 2 TB; SEsparse for virtual disks larger than 2 TB | SEsparse
Virtual disk emulation type | 512n | 512n
vMotion | Yes | Yes
Storage vMotion across different datastore types | Yes | Yes
High Availability and Fault Tolerance | Yes | Yes
DRS and Storage DRS | Yes | Yes
RDM | Yes | Yes

Happy Learning 🙂

VMware vSphere Replication 6.5 released with 5 minute RPO

I was going through the release notes of vSphere Replication 6.5 and was really surprised to see the RPO of this new release, so I thought it worth sharing:

VMware vSphere Replication 6.5 provides the following new features:

  • 5-minute Recovery Point Objective (RPO) support for additional datastore types – This version of vSphere Replication extends support for the 5-minute RPO setting to the following new datastores: VMFS 5, VMFS 6, NFS 4.1, NFS 3, VVOL, and VSAN 6.5. This allows customers to replicate virtual machine workloads with an RPO setting as low as 5 minutes between these various datastore options.

Surely a great reason to upgrade to vSphere 6.5: identify which of your VMs can work with a 5-minute RPO and use this free replication feature of the vSphere suite.

Creating IP Sets with NSX API Calls using PowerShell

A few days back, one of my close friends was working with a customer whose environment is a cross-vCenter NSX environment. Because of this, they had to create IP sets, as IP sets are universal objects that get synchronized with the secondary NSX instance. Since the IP set list was big, we looked at automating it using the NSX API, but we didn’t want to make all the API calls manually, so we decided to do it the PowerShell way using Invoke-RestMethod, which was introduced in PowerShell 3.0.

Invoke-RestMethod

The Invoke-RestMethod cmdlet sends HTTP and HTTPS requests to Representational State Transfer (REST) web services that return richly structured data. Windows PowerShell formats the response based on the data type. For an RSS or Atom feed, Windows PowerShell returns the Item or Entry XML nodes. For JavaScript Object Notation (JSON) or XML, Windows PowerShell converts (or deserializes) the content into objects.


Script to Create IP Sets from CSV is as below:

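What follows is a minimal sketch of such a script. The NSX Manager FQDN, the CSV path, and the Name/Value column layout are placeholder assumptions to adapt for your environment; posting to the universalroot-0 scope creates the IP sets as universal objects:

# -- Minimal sketch: create universal IP sets from a CSV via the NSX API --
$nsxManager = "nsxmgr.corp.local"   # placeholder NSX Manager FQDN
$cred = Get-Credential              # NSX admin credentials
$pair = "$($cred.UserName):$($cred.GetNetworkCredential().Password)"
$head = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair)) }

# Lab shortcut: ignore the NSX Manager's self-signed certificate
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

# CSV assumed to have Name and Value columns, e.g. Web-Servers,172.16.10.0/24
$ipSets = Import-Csv -Path .\ipsets.csv

foreach ($set in $ipSets) {
    $uri  = "https://$nsxManager/api/2.0/services/ipset/universalroot-0"
    $body = "<ipset><name>$($set.Name)</name><value>$($set.Value)</value></ipset>"
    Invoke-RestMethod -Uri $uri -Method Post -Headers $head -Body $body -ContentType "application/xml"
}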


Finally, using Invoke-RestMethod or Invoke-WebRequest, we can automate many routine NSX tasks with PowerShell. Hope this helps with your daily administrative tasks 🙂

 

Learn NSX – Part-12 (Create NSX Edge Services Gateway)

Each NSX Edge virtual appliance can have a total of 10 uplink and internal network interfaces. Overlapping IP addresses are not allowed for internal interfaces, and overlapping subnets are not allowed for internal and uplink interfaces.

1- Log in to the vSphere Web Client and click Networking & Security.

2- Select NSX Edges under the Networking & Security section.

3- Click the green + icon to add a new NSX Edge.


4- In the Name and description dialog box:

  1. Select Edge Services Gateway as the Install Type.
  2. Enter a name for the NSX Edge services gateway in the Name text box. The name should be unique across all NSX Edge services gateways within a tenant.
  3. Enter a host name for the NSX Edge services gateway in the Hostname text box.
  4. Enter a description in the Description text box.
  5. Enter tenant details in the Tenant text box.
  6. Confirm that Deploy NSX Edge is selected (default).
  7. Select Enable High Availability to enable and configure high availability.
  8. Click Next.


5 – In the Settings dialog box:

  1. Leave the default user name of admin in the User Name text box.
  2. Enter a password in the Password and Confirm Password text boxes. The password must be 12 to 255 characters and must contain the following:
    1. At least one upper case letter
    2. At least one lower case letter
    3. At least one number
    4. At least one special character
  3. Select the Enable SSH access check box.
  4. Select the Enable auto rule generation check box.
  5. Select EMERGENCY from the Edge Control Level Logging drop-down menu.
  6. Click Next.


6 – In the Configure deployment dialog box:

  1. Select the data center from the Datacenter drop-down menu.
  2. Select the appropriate Appliance Size.
  3. Click the green + icon in the NSX Edge Appliances section.
  4. Select the cluster or resource pool from the Cluster/Resource Pool drop-down menu.
  5. Select the datastore from the Datastore drop-down menu.
  6. (Optional) Select the host from the Host drop-down menu.
  7. (Optional) Select the folder from the Folder drop-down menu.
  8. Click OK and click Next.


7 – In the Configure Interfaces dialog box:

  1. Under the Configure interfaces of this NSX Edge section, click the green + icon to create an interface.
    1. NOTE – You must add at least one internal interface for HA to work.
  2. Enter the NSX Edge interface name in the Name text box.
  3. Select Internal or Uplink as the interface type.
  4. Click Change next to the Connected To selection box to choose the appropriate logical switch, standard port group or distributed port group with which to connect the interface.
  5. Select Connected for the Connectivity Status.
  6. Assign a primary IP address and subnet prefix length.
  7. Select the appropriate options.
  8. Select Enable Proxy ARP for overlapping network forwarding between different interfaces. Select Send ICMP Redirect to convey routing information to hosts.
  9. Click OK and click Next.


8 – In the Default gateway settings dialog box, deselect the Configure Default Gateway check box and click Next.


9 – In the Firewall and HA dialog box:

  1. Select the Configure Firewall default policy check box.
  2. Select Accept for Default Traffic Policy.
  3. Select Disable for Logging.
  4. (Optional) If high availability is enabled, complete the Configure HA parameters section. By default, HA automatically chooses an internal interface and automatically assigns link-local IP addresses.
  5. Click Next.


NOTE – If ANY is selected for the high availability interface but there are no internal interfaces configured, the user interface will not display an error. Two NSX Edge appliances will be created, but because there is no internal interface configured, the new NSX Edge appliances remain in standby and high availability is disabled. After an internal interface is configured, high availability will be enabled on the NSX Edge appliances.

10 – In the Ready to complete dialog box, review the configuration and click Finish.

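Once the deployment completes, the new ESG can also be confirmed through the NSX API. A minimal PowerShell sketch with a placeholder NSX Manager FQDN (the XML property names may vary slightly between NSX versions):

# List deployed NSX Edges and show their IDs, names, and types
$nsxManager = "nsxmgr.corp.local"
$cred = Get-Credential
$pair = "$($cred.UserName):$($cred.GetNetworkCredential().Password)"
$head = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair)) }
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

$edges = Invoke-RestMethod -Uri "https://$nsxManager/api/4.0/edges" -Method Get -Headers $head
$edges.pagedEdgeList.edgePage.edgeSummary | Select-Object objectId, name, edgeType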

Happy Learning 🙂

VM Component Protection (VMCP)

vSphere 6.0 has a powerful feature as a part of vSphere HA called VM Component Protection (VMCP). VMCP protects virtual machines from storage-related events, specifically Permanent Device Loss (PDL) and All Paths Down (APD) incidents.

Permanent Device Loss (PDL)
A PDL event occurs when the storage array issues a SCSI sense code indicating that the device is unavailable. A good example of this is a failed LUN, or an administrator inadvertently removing a WWN from the zone configuration. In the PDL state, the storage array can communicate with the vSphere host and will issue SCSI sense codes to indicate the status of the device. When a PDL state is detected, the host stops sending I/O requests to the array, as it considers the device permanently unavailable and there is no reason to continue issuing I/O to the device.

All Paths Down (APD)
If the vSphere host cannot access the storage device and there is no PDL SCSI code returned from the storage array, then the device is considered to be in an APD state. This is different from a PDL because the host doesn’t have enough information to determine whether the device loss is temporary or permanent; the device may return, or it may not. During an APD condition, the host continues to retry I/O commands to the storage device until the period known as the APD timeout is reached. Once the APD timeout is reached, the host begins to fast-fail any non-virtual-machine I/O to the storage device, that is, any I/O initiated by the host such as mounting NFS volumes, but not I/O generated within the virtual machines. The I/O generated within the virtual machine will be retried indefinitely. By default, the APD timeout value is 140 seconds and can be changed per host using the Misc.APDTimeout advanced setting.
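
The timeout can also be inspected and changed from ESXCLI; the 180-second value below is just an example:

# Show the current APD timeout for this host
esxcli system settings advanced list -o /Misc/APDTimeout

# Change it from the default 140 seconds to 180 seconds
esxcli system settings advanced set -o /Misc/APDTimeout -i 180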

VM Component Protection (VMCP)
vSphere HA can now detect PDL and APD conditions and respond according to the behavior that you configure. The first step is to enable VMCP in your HA configuration. This setting simply informs the vSphere HA agent that you wish to protect your virtual machines from PDL and APD events.

Cluster Settings -> vSphere HA -> Host Hardware Monitoring – VM Component Protection -> Protect Against Storage Connectivity Loss.

1.GIF

The next step is configuring the way you want vSphere HA to respond to PDL and APD events. Each type of event can be configured independently. These settings are found in the same window where VMCP is enabled, by expanding the Failure conditions and VM response section.

2

The available settings and their explanations are covered on this blog.
