Cross-Cloud Disaster Recovery with VMware Cloud on AWS and Azure VMware Solution

Disaster recovery is an important aspect of any cloud deployment. It is always possible that an entire data center or region of a cloud provider goes down. This has already happened to most major cloud providers, including Amazon AWS, Microsoft Azure, and Google Cloud, and it will surely happen again in the future. The providers themselves will readily suggest that you have a disaster recovery and business continuity strategy that spans multiple regions, so that if a single geographic region goes down, the business can continue to operate from another region. This sounds good in theory, but there are several issues with relying on another region of the same cloud provider. Here are the key reasons why I think a single provider's cross-region DR will not be that effective:

  • A single cloud region failure can cause huge capacity pressure on the other regions used for DR, as many tenants fail over at once
  • Cloud regions are not fully independent. For example, AWS RDS allows read replicas in other regions, but one wrong entry gets replicated across all read replicas, which breaks the notion that "cloud regions are independent"
  • Data is better protected from accidental deletion when stored across clouds. For example, if malicious code, an employee, or even a cloud provider's employee runs a script that deletes all the data, in most cases this will not impact the copy in the other cloud

In this blog post we will see how a VMware cross-cloud disaster recovery solution can help customers and partners overcome these BC/DR challenges.

Deployment Architecture

Here is my deployment architecture and connectivity:

  • One VMware Cloud on AWS SDDC
  • One Azure VMware Solution SDDC
  • Both SDDCs are connected over a Megaport MCR

Activate VMware Site Recovery on VMware Cloud on AWS

To configure site recovery on the VMware Cloud on AWS SDDC, go to the SDDC page, click the Add Ons tab, and under the Site Recovery add-on click the ACTIVATE button.

In the pop-up window, click ACTIVATE again.

This deploys SRM on the SDDC; wait for it to finish.

Deploy VMware Site Recovery Manager on Azure VMware Solution

In your Azure VMware Solution private cloud, under Manage, select Add-ons > Disaster recovery and click Get Started.

From the Disaster Recovery Solution drop-down, select VMware Site Recovery Manager (SRM), provide the license key, agree to the terms and conditions, and then select Install.

After the SRM appliance installs successfully, you'll need to install the vSphere Replication appliances. Each replication server accommodates up to 200 protected VMs, so scale in or scale out as per your needs; for example, 500 protected VMs would need three replication servers.

Move the vSphere Replication server slider to indicate the number of replication servers you want based on the number of VMs to be protected, and then select Install.

Once installed, verify that both the SRM and vSphere Replication appliances show as installed. After installing VMware SRM and vSphere Replication, you need to complete the configuration and site pairing in vCenter Server.

  1. Sign in to vCenter Server as cloudadmin@vsphere.local.
  2. Navigate to Site Recovery, check the status of both vSphere Replication and VMware SRM, and then select OPEN Site Recovery to launch the client.

Configure site pairing in vCenter Server

Before starting the site pair, make sure the firewall rules between VMware Cloud on AWS and Azure VMware Solution have been opened as described Here and Here.

To start pairing, select NEW SITE PAIR in the Site Recovery (SR) client in the new tab that opens.

Enter the remote site details, select FIND VCENTER SERVER INSTANCES, select the remote vCenter, and click NEXT. At this point, the client should discover the VRM and SRM appliances on both sides as services to pair.

Select the appliances to pair and then select NEXT.

Review the settings and then select FINISH. If successful, the client displays another panel for the pairing. However, if unsuccessful, an alarm will be reported.

After you’ve created the site pairing, you can view the site pairs and other related details, and you are ready to plan for disaster recovery.

Planning

Mappings allow you to specify how Site Recovery Manager maps virtual machine resources on the protected site to resources on the recovery site. You can configure the following site-wide mappings to map objects in the vCenter Server inventory on the protected site to corresponding objects in the vCenter Server inventory on the recovery site:

  • Network Mapping
  • IP Customization
  • Folder Mapping
  • Resource Mapping
  • Storage Policy Mapping
  • Placeholder Datastores

Creating Protection Groups

A protection group is a collection of virtual machines that Site Recovery Manager protects together. Protection groups are a per-SDDC configuration and need to be created on each SDDC if VMs are replicated bi-directionally.

Recovery Plan

A recovery plan is like an automated run book. It controls every step of the recovery process, including the order in which Site Recovery Manager powers on and powers off virtual machines, the network addresses that recovered virtual machines use, and so on. Recovery plans are flexible and customizable.

A recovery plan runs a series of steps that must be performed in a specific order for a given workflow such as a planned migration or re-protection. You cannot change the order or purpose of the steps, but you can insert your own steps that display messages and run commands.

A recovery plan includes one or more protection groups. Conversely, you can include a protection group in more than one recovery plan. For example, you can create one recovery plan to handle a planned migration of services from the protected site to the recovery site for the whole SDDC and another set of plans per individual departments. Thus, having multiple recovery plans referencing one protection group allows you to decide how to perform recovery.

Steps to add a VM for replication:

There are multiple ways to do this; here is one:

  • Right-click the VM, select All Site Recovery actions, and click Configure Replication
  • Choose the target site and the replication server to handle the replication
  • VM validation happens, and then choose the target datastore
  • Under Replication settings, choose the RPO, point-in-time instances, etc.
  • Choose the protection group to which you want to add this VM, check the summary, and click Finish

Cross-cloud disaster recovery is one of the most secure and reliable approaches to service availability. The reason cross-cloud disaster recovery is often the best route for businesses is that it provides IT resilience and business continuity. This continuity matters most when considering how companies operate, how customers and clients rely on them for continuous service, and when looking at your company's critical data, which you do not want exposed or compromised.

Frankly speaking, IT disasters happen everywhere, including in public clouds, and much more frequently than you might think. When they occur, they present stressful situations that require fast action. Even with a strategic method in place for addressing these occurrences, things can seem to spin out of control. Even when posed with these situations, IT leaders must remain calm and be able to fully rely on the system they have in place, or the partner they are working with, for disaster recovery.

Customers and partners with VMware Cloud on AWS and Azure VMware Solution can build a cross-cloud disaster recovery solution to simplify disaster recovery with the only VMware-integrated solution that runs on any cloud. VMware Site Recovery Manager (SRM) provides policy-based management, minimizes downtime in case of disasters via automated orchestration, and enables non-disruptive testing of your disaster recovery plans.

Tanzu on Azure Native

VMware Tanzu Kubernetes Grid provides organizations with a consistent, upstream-compatible, regional Kubernetes substrate that is ready for end-user workloads and ecosystem integrations. You can deploy Tanzu Kubernetes Grid across software-defined datacenters (SDDC) and public cloud environments, including vSphere, Microsoft Azure, and Amazon EC2. In this blog post we will deploy Tanzu Kubernetes Grid 1.4 to Azure native VMs.

Pre-requisites

  • Deploy a client VM (in my case an Ubuntu VM). This VM will be the bootstrap VM for Tanzu, from which we will deploy the management cluster in Azure; a native Azure VM works fine for this.
  • TKG uses a local Docker install to set up a small, ephemeral kind-based Kubernetes cluster that builds the TKG management cluster in Azure, so you need Docker locally to run the kind cluster.
  • Download and unpack the “Tanzu CLI” and “Kubectl” from HERE onto the above VM, in a new directory named "tkg" or "tanzu"
    • Unpack using #> tar -xvf tanzu-cli-bundle-v1.4.0-linux-amd64.tar
    • After you unpack the bundle file, you will see a cli folder with multiple subfolders and files
    • Install the Tanzu CLI using #> sudo install core/v1.4.0/tanzu-core-linux_amd64 /usr/local/bin/tanzu
  • Unpack the kubectl binary using:
    • #> gunzip kubectl-linux-v1.21.2+vmware.1.gz
    • Install kubectl using #> sudo install kubectl-linux-v1.21.2+vmware.1 /usr/local/bin/kubectl
  • Run the following commands from the tanzu directory to install all the Tanzu plugins (a quick verification sketch follows this list):
    • #> tanzu plugin install --local cli all
    • #> tanzu plugin list
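
Before moving on, it is worth sanity-checking the bootstrap VM. A minimal sketch (exact version output will differ in your environment):

#> tanzu version
#> kubectl version --client
#> docker info

If docker info fails, fix the local Docker install first; the kind-based bootstrap cluster will not come up without it.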

Configure Azure resources

In this section we will prepare Microsoft Azure for running Tanzu Kubernetes Grid. For the networking, I have prepared the Azure network as below:

  • Every Tanzu Kubernetes Grid cluster requires 2 public IP addresses
  • For each Kubernetes Service object with type LoadBalancer, 1 public IP address is required
  • A VNET with the following (only required if using an existing VNET; otherwise TKG can create all of this automatically):
    • A subnet for the management cluster control plane nodes
    • A Network Security Group on the control plane subnet with inbound security rules that allow TCP over ports 22 and 6443 for any source and destination, to enable SSH and Kubernetes API server connections
    • One additional subnet and Network Security Group for the management cluster worker nodes
  • The high-level network topology once the Tanzu management cluster is deployed is shown in the diagram below
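
If you prefer to pre-provision this network from the Azure CLI rather than the portal, here is a minimal sketch. The resource group, VNET, subnet, and NSG names (tkg-rg, tkg-vnet, and so on) and the address ranges are placeholders of my own; adjust them to your environment:

#> az group create --name tkg-rg --location eastus
#> az network vnet create --resource-group tkg-rg --name tkg-vnet --address-prefix 10.0.0.0/16 --subnet-name tkg-cp-subnet --subnet-prefix 10.0.0.0/24
#> az network nsg create --resource-group tkg-rg --name tkg-cp-nsg
#> az network nsg rule create --resource-group tkg-rg --nsg-name tkg-cp-nsg --name allow-ssh-and-k8s-api --priority 100 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 22 6443
#> az network vnet subnet update --resource-group tkg-rg --vnet-name tkg-vnet --name tkg-cp-subnet --network-security-group tkg-cp-nsg
#> az network vnet subnet create --resource-group tkg-rg --vnet-name tkg-vnet --name tkg-worker-subnet --address-prefix 10.0.1.0/24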

Get Tenant ID

Make a note of the Tenant ID value, as it will be used later. You can find it by hovering over your account name at the upper-right, or else browse to Azure Active Directory > <Your Org> > Properties > Tenant ID.

Register Tanzu Kubernetes Grid as an Azure Client App & Get Application (Client) ID

Tanzu Kubernetes Grid manages Azure resources as a registered client application that accesses Azure through a service principal account. The steps below register your Tanzu Kubernetes Grid application with Azure Active Directory, create its account, create a client secret for authenticating communications, and record information needed later to deploy a management cluster.

  • Go to Active Directory > App registrations and click + New registration.
  • Enter a name and select who else can use it. Leave the Redirect URI (optional) field blank.
  • Click Register. This registers the application with an Azure service principal account.

Make a note of the Application (client) ID value; we will use it later.

Get Subscription ID

From the Azure Portal top level, browse to Subscriptions. At the bottom of the pane, select one of the subscriptions you have access to, and make a note of Subscription ID.

Add a Role, Create and Record Secret ID

Click the subscription listing to open its overview pane, select Access control (IAM), and click Add a role assignment.

  • In the Add role assignment pane:
    • Select the Owner role
    • Select “user, group, or service principal”
    • Under Select, enter the name of your app, in my case “avnish-tkg”. It appears under Selected Members
  • Click Save. A popup appears confirming that your app was added as an owner for your subscription. You can also verify by going into the “Owned applications” section.
  • On the Azure Portal go to Azure Active Directory, click App registrations, and select your “avnish-tkg” app under Owned applications.
  • Go to Certificates & secrets, then under Client secrets click New client secret.
  • In the Add a client secret popup, enter a Description, choose an expiration period, and click Add.
  • Azure lists the new secret with its generated value under Client secrets. Take a note of the “Client Secret” value, which we will use later.

With all the above steps complete, we will have four recorded values:

Subscription ID – XXXXXXX-XXX-4853-9cff-3d2d25758b70
Application Client ID – XXXXXXXX-6134-xxxx-b1a9-8fcbfd3ea189
Secret Value – XXXXX-xxxxxxxxxxxxxxxkB3VdcBF.c_C.
Tenant ID – XXXXXXXX-3cee-4b4a-a4d6-xxxxxdd62f0

We will use these values when we create the management cluster.
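
As an aside, the whole portal flow above can alternatively be compressed into a single CLI call: az ad sp create-for-rbac both registers the app and assigns the role, and its JSON output contains the appId (client ID), password (client secret), and tenant values. A sketch; the app name and the <SUBSCRIPTION_ID> placeholder are illustrative:

#> az ad sp create-for-rbac --name "avnish-tkg" --role Owner --scopes /subscriptions/<SUBSCRIPTION_ID>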

Create an SSH Key-Pair

To connect to the management machines in Azure, we must provide the public key part of an SSH key pair. You can use the "ssh-keygen" tool to generate one:

  • #> ssh-keygen -t rsa -b 4096 -C "email@example.com"
  • At the prompt Enter file in which to save the key (/root/.ssh/id_rsa): press Enter to accept the default
  • Enter and repeat a password for the key pair

Copy the content of .ssh/id_rsa.pub, which we will use in the next section.

Accept Base VM image license

To run management cluster VMs on Azure, accept the license for their base Kubernetes version and machine OS by logging in to the Azure CLI and running the below commands:

#> az login --service-principal --username AZURE_CLIENT_ID --password AZURE_CLIENT_SECRET --tenant AZURE_TENANT_ID

AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID are your avnish-tkg app's client ID and secret and your tenant ID, as recorded above

#> az vm image terms accept --publisher vmware-inc --offer tkg-capi --plan k8s-1dot21dot2-ubuntu-2004 --subscription AZURE_SUBSCRIPTION_ID

In Tanzu Kubernetes Grid v1.4.0, the default cluster image --plan value is k8s-1dot21dot2-ubuntu-2004.
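
If you want to confirm the acceptance went through, the CLI can read the terms back; a quick sketch using the same plan values as above:

#> az vm image terms show --publisher vmware-inc --offer tkg-capi --plan k8s-1dot21dot2-ubuntu-2004 --subscription AZURE_SUBSCRIPTION_ID

The returned JSON should show "accepted": true.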

Start the Installer Interface

On the machine on which you downloaded and installed the Tanzu CLI, run the following command:

#> tanzu management-cluster create -b "IP of this machine:port" -u

-b = the IP address and port to bind the installer UI to
-u = launch the installer user interface

On your local machine, browse to the above machine’s IP address to access the installer interface, then choose “Microsoft Azure” and click “DEPLOY”.

In the IaaS Provider section, enter the Tenant ID, Client ID, Client Secret, and Subscription ID values for your Azure account. We recorded these values in the pre-requisites section above.

  • Click Connect. The installer verifies the connection and changes the button label to Connected.
  • Select the Azure region in which to deploy the management cluster.
  • Paste the contents of your SSH public key, .ssh/id_rsa.pub, into the text box.
  • Under Resource Group, select either the Select an existing resource group or the Create a new resource group radio button.

In the VNET for Azure section, select either the Create a new VNET on Azure or the Select an existing VNET radio button; in my case I am using the existing VNET and subnets which I had provisioned already.

To make the management cluster private, tick the Private Azure Cluster checkbox; otherwise, leave it unticked. By default, Azure management and workload clusters are public. But you can also configure them to be private, which means their API server uses an Azure internal load balancer (ILB) and is therefore only accessible from within the cluster’s own VNET or peered VNETs.

In the Management Cluster Settings section, choose the Development or Production tile, and use the Instance type drop-down menu to select from different combinations of CPU, RAM, and storage for the control plane node VM or VMs.

Under Worker Node Instance Type, select the configuration for the worker node VM.

In the Metadata section, optionally provide descriptive information about this management cluster.

In the Kubernetes Network section, check the default Cluster Service CIDR and Cluster Pod CIDR ranges. If the default CIDR ranges of 100.64.0.0/13 and 100.96.0.0/11 are not available, change the values under Cluster Service CIDR and Cluster Pod CIDR.

In the Identity Management section, enable or disable Enable Identity Management Settings based on your use case.

In the OS Image section, select the OS and Kubernetes version image template to use for deploying Tanzu Kubernetes Grid VMs, and click Next.

In the Registration URL field, copy and paste the registration URL you obtained from Tanzu Mission Control. If you don’t have TMC access or a URL, move on; you can register it later if you want.

In the CEIP Participation section, optionally deselect the check box to opt out of the VMware Customer Experience Improvement Program, and then click Review Configuration to see the details of the management cluster that we have configured.

And finally, click Deploy Management Cluster.

Deployment of the management cluster can take several minutes; in my case it took around 12 minutes.

And finally, once everything is deployed and configured, the installer reports “Management Cluster Created”.

The installer saves the configuration of the management cluster to ~/.config/tanzu/tkg/clusterconfigs with a generated filename of the form UNIQUE-ID.yaml. After the deployment has completed, you can rename the configuration file to something memorable.
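
That saved file is also the quickest route to a repeatable deployment: the same cluster can be stood up again non-interactively, and the CLI can report on the running management cluster. A sketch, assuming the file was renamed to azure-mgmt.yaml (my own example name):

#> tanzu management-cluster create --file ~/.config/tanzu/tkg/clusterconfigs/azure-mgmt.yaml
#> tanzu management-cluster get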

If you go to the Azure portal, you should see one or three control plane VMs (depending on whether you chose the Development or Production tile) with names similar to CLUSTER-NAME-control-plane-abcde, one or three worker node VMs with names similar to CLUSTER-NAME-md-0-rh7xv, and Disk and Network Interface resources for the control plane and worker node VMs, with names based on the same name patterns.

At this point the management cluster is deployed successfully, and we can go ahead and easily create workload clusters based on our requirements; see the sketch below. Tanzu allows you to deploy and manage Kubernetes clusters on vSphere, on VMware-based public clouds, and natively on AWS and Azure. In the next post I will deploy Tanzu on AWS.
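
For example, a workload cluster can be created from a config file modeled on the management cluster's (with its own CLUSTER_NAME), and then accessed with kubectl. A minimal sketch; the cluster and file names are placeholders of mine:

#> tanzu cluster create azure-workload --file ~/.config/tanzu/tkg/clusterconfigs/azure-workload.yaml
#> tanzu cluster kubeconfig get azure-workload --admin
#> kubectl config use-context azure-workload-admin@azure-workload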

Integrate Azure Files with Azure VMware Solution

Azure VMware Solution is a VMware-validated solution with ongoing validation and testing of enhancements and upgrades. Microsoft manages and maintains the private cloud infrastructure and software, allowing customers to focus on developing and running workloads in their private clouds.

In this blog post I will configure virtual machines running on Azure VMware Solution to access Azure Files over an Azure private endpoint. This is an end-to-end, four-step process, described below and explained in this video:

Here is the step-by-step process of configuring and accessing Azure Files on Azure VMware Solution:

Step -01 Deploy Azure VMware SDDC

Azure VMware Solution provides customers private clouds that contain vSphere clusters, built on dedicated bare-metal Azure infrastructure. The minimum initial deployment is three hosts, and additional hosts can be added one at a time, up to a maximum of 16 hosts per cluster. All provisioned private clouds have vCenter Server, vSAN, vSphere, and NSX-T. Customers can migrate workloads from their on-premises environments, deploy new virtual machines (VMs), and consume Azure services from their private clouds.

In this blog I am not going to cover the AVS deployment process, as this post is focused on the Azure Files integration; you can follow the process below for the deployment of Azure VMware Solution or check the official documentation Here.

Step -02 Create ExpressRoute to Connect to Azure Native Services

In this section we need to decide whether to use an existing or a new ExpressRoute virtual network gateway. Follow this decision tree for your AVS to Azure native services configuration.

Diagram showing the workflow for connecting Azure Virtual Network to ExpressRoute in Azure VMware Solution.

For this blog post, I will create a new VNET and a new ExpressRoute connection and attach that VNET to Azure VMware Solution. So, first things first: deploy your Azure VMware Solution, and once that is done, go ahead and create an Azure Virtual Network and Virtual Network Gateway.

Create an Azure virtual network

  • On the Virtual Network page, select Create to set up your virtual network for your private cloud.
  • On the Create Virtual Network page, enter the details for your virtual network.
  • On the Basics tab, enter a name for the virtual network, select the appropriate region, and select Next.
  • On the IP Addresses tab (NOTE: you must use an address space that does not overlap with the address space you used when you created your private cloud), select + Add subnet, and on the Add subnet page give the subnet a name and an appropriate address range. When complete, select Add.
  • Select Review + create.

Create a virtual network gateway

Now that we have created a virtual network, we will create a virtual network gateway. On the Virtual Network gateway page, select Create. On the Basics tab of the Create virtual network gateway page, provide values for the fields, and then select Review + create. (A CLI equivalent follows the field list below.)

  • Subscription: Pre-populated value with the subscription to which the resource group belongs.
  • Resource group: Pre-populated value for the current resource group. The value should be the resource group you created previously.
  • Name: Enter a unique name for the virtual network gateway.
  • Region: Select the same geographical location as your AVS private cloud.
  • Gateway type: Select ExpressRoute.
  • SKU: Leave the default value: Standard.
  • Virtual network: Select the virtual network you created previously. If you don’t see the virtual network, make sure the gateway’s region matches the region of your virtual network.
  • Gateway subnet address range: This value is populated when you select the virtual network. Don’t change the default value.
  • Public IP address: Select Create new.
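
For reference, the equivalent gateway deployment from the Azure CLI would look roughly like this (resource names are placeholders of mine; note that an ExpressRoute gateway can take 30 minutes or more to provision):

#> az network public-ip create --resource-group avs-rg --name avs-ergw-pip --sku Standard --allocation-method Static
#> az network vnet-gateway create --resource-group avs-rg --name avs-ergw --vnet avs-vnet --gateway-type ExpressRoute --sku Standard --public-ip-address avs-ergw-pip
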
Connect ExpressRoute to the virtual network gateway

Let’s go to the Azure portal, navigate to the Azure VMware Solution private cloud, select Manage > Connectivity > ExpressRoute, and then select + Request an authorization key.

Provide a name for it and select Create. It may take about 30 seconds to create the key. Once created, the new key appears in the list of authorization keys for the private cloud.

Copy the authorization key and ExpressRoute ID; we will need them to complete the peering. Now navigate to the virtual network gateway and select Connections > + Add.

On the Add connection page, provide values for the fields, and select OK.

  • Name: Enter a name for the connection.
  • Connection type: Select ExpressRoute.
  • Redeem authorization: Ensure this box is selected.
  • Virtual network gateway: The virtual network gateway that we deployed above.
  • Authorization key: Paste the authorization key copied in the previous step.
  • Peer circuit URI: Paste the ExpressRoute ID copied in the previous step.

The connection between your ExpressRoute circuit and your Virtual Network is created successfully.

To test connectivity, I deployed a VM on Azure VMware Solution and one VM in native Azure, and I can reach both VMs. I took a console of the AVS VM and could RDP to the Azure native VM, and ping from the Azure native VM to the VM deployed in AVS; this confirms that we have successfully established connectivity between AVS and Azure native.

Step -03 Create Storage and File Shares

Now let's move to Step -03. Azure file shares are deployed into storage accounts, which are top-level objects that represent a shared pool of storage. This pool of storage can be used to deploy multiple file shares.

Once you are on the Azure portal, click Create under Storage account and create a storage account in the same region where we have Azure VMware Solution deployed. We will use this storage account to configure "Azure Files" over a private link.
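
If you'd rather script this step, a storage account of the kind used here can be created from the Azure CLI; a minimal sketch (the account and resource group names are placeholders, and the account name must be globally unique):

#> az storage account create --name avsfiles2021 --resource-group avs-rg --location eastus --sku Standard_LRS --kind StorageV2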

Once the storage account is created, let's get into the Networking section of the storage account, which allows you to configure networking options. In addition to the default public endpoint for a storage account, Azure Files provides the option to have one or more private endpoints.

A private endpoint is an endpoint that is only accessible within an Azure virtual network and by AVS Network. When you create a private endpoint for your storage account, your storage account gets a private IP address from within the address space of your virtual network, much like how an on-premises file server or NAS device receives an IP address within the dedicated address space of your on-premises network.

Let’s create a private endpoint by clicking “+ Private endpoint”.

Enter the basic information, and make sure the “Region” you choose is the same region where your AVS has been deployed.

On the next screen, make sure to choose “file” as the “Target sub-resource”.

Select the Azure Virtual Network and subnet that we created in Step -02, and click Create.
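
The same private endpoint can be created from the CLI; a hedged sketch re-using my placeholder names (the --group-id file flag is what scopes the endpoint to Azure Files):

#> STORAGE_ID=$(az storage account show --name avsfiles2021 --resource-group avs-rg --query id --output tsv)
#> az network private-endpoint create --resource-group avs-rg --name avsfiles-pe --vnet-name avs-vnet --subnet avs-subnet --private-connection-resource-id $STORAGE_ID --group-id file --connection-name avsfiles-conn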

Once in the storage account, select File shares and click “+ File share”. The new file share blade should appear on the screen. Complete the fields in the new file share blade to create a file share (a CLI sketch follows this list):

  • Name: the name of the file share to be created.
  • Quota: the quota of the file share for standard file shares; the provisioned size of the file share for premium file shares.
  • Tier: the selected tier for the file share.
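
A CLI equivalent for the share creation (a sketch; the names and the 1024 GiB quota are my own choices):

#> az storage share-rm create --storage-account avsfiles2021 --name avs-share --quota 1024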

Now that the share is created, to mount it, select the file share we need to mount and then click “Connect”.

Select the drive letter to mount the share to, choose the authentication method, and copy the provided script.
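
The generated script embeds a storage account key for authentication; if you ever need to fetch that key yourself, the CLI can return it (same placeholder names as before):

#> az storage account keys list --account-name avsfiles2021 --resource-group avs-rg --query "[0].value" --output tsv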

Step -04 Access Files over SMB

On the Windows server running on Azure VMware Solution, paste the script into a shell on the host you’d like to mount the file share to, and run it.

This should mount the Azure file share to your Windows server as the Z: drive, which you can use to transfer or store any data you need.

If you face issues accessing the file share using the host DNS name, take the private IP of the share connection by clicking “Network Interface” and copying the private IP.

Add this private IP address to the Windows server's hosts file, and then it should work as expected.
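
A quick way to see which address the share name resolves to, before and after the hosts file edit, is nslookup against the share's endpoint (placeholder account name again); with private endpoint DNS working, it should return the private IP rather than a public one:

#> nslookup avsfiles2021.file.core.windows.net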

This completes the integration of Azure VMware Solution with Azure Files (an Azure native service) over a private link. Similarly, customers can easily integrate many more Azure native services with Azure VMware Solution.