# Marketing-aligned diagrams for dual-region guidance (Azure#216)
* Incorporating PG feedback

* Updated with PG feedback

* Feedback from Mahesh

* Updated diagrams

* Additional diagram updates

* Additional diagram updates

* Additional diagram updates

* Updated text
fguerri authored Feb 8, 2023
1 parent 6b06893 commit b905f49
Showing 8 changed files with 18 additions and 16 deletions.
This article focuses on a typical dual-region scenario, shown in Figure 1 below:
- An AVS private cloud has been deployed in each region.

![figure1](media/dual-region-fig1.png)
*Figure 1. Dual-region scenario. This article discusses options for connecting AVS private clouds to Azure VNets, on-prem sites and the internet in such a way that, in case of partial or complete regional disasters, the surviving components (AVS private clouds, Azure-native resources, on-prem sites) maintain connectivity with each other and the internet.*

> [!NOTE]
> In the reference scenario of Figure 1, the two regional hub VNets are connected via global VNet peering. While not strictly required (traffic between Azure VNets in the two regions could be routed over ExpressRoute connections), this configuration is strongly recommended. VNet peering minimizes latency and maximizes throughput, as it removes the need to hairpin traffic through the ExpressRoute meet-me edge routers.
The next sections describe the AVS network configuration required to enable the following communication patterns in the reference dual-region scenario:
- AVS to AVS (covered in the section "AVS cross-region connectivity");
- AVS to on-prem sites connected over ExpressRoute (covered in the section "Hybrid connectivity");
- AVS to Azure Virtual Networks (covered in the section "Azure Virtual Networks connectivity");
- AVS to internet (covered in the section "Internet connectivity").

## AVS cross-region connectivity
When multiple AVS private clouds exist, layer-3 connectivity among them is often a requirement, for example to support data replication.
AVS natively supports direct connectivity between two private clouds deployed in different Azure regions. Private clouds connect to the Azure network in their own region through ExpressRoute circuits that are managed by the platform and terminated on dedicated ER meet-me locations. Throughout this article, these circuits are referred to as “AVS managed circuits”. They should not be confused with the circuits that customers deploy to connect their on-prem sites to Azure, which are referred to as “customer managed circuits” (see Figure 2).
Direct connectivity between private clouds is based on [ExpressRoute Global Reach](https://learn.microsoft.com/en-us/azure/expressroute/expressroute-global-reach) connections between AVS managed circuits, as shown by the green line in the diagram below. Please refer to the [official documentation](https://learn.microsoft.com/en-us/azure/azure-vmware/tutorial-expressroute-global-reach-private-cloud) for more information (the article describes the procedure for connecting an AVS managed circuit with a customer managed circuit; the same procedure applies to connecting two AVS managed circuits).
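
The tutorial above uses the Azure portal. Purely as an illustration, the following Azure CLI sketch shows how a Global Reach connection between two AVS managed circuits could be scripted; it assumes the `vmware` CLI extension, placeholder resource names, and output property names that may vary slightly between extension versions.

```bash
# Illustrative only: connect the AVS managed circuits of two private clouds
# over Global Reach. All resource names are placeholders.
az extension add --name vmware

# Create an ExpressRoute authorization on the region-2 private cloud's
# managed circuit, then capture the key and the circuit's resource ID.
az vmware authorization create \
  --resource-group rg-avs-region2 --private-cloud avs-region2 --name gr-to-region1
KEY=$(az vmware authorization show \
  --resource-group rg-avs-region2 --private-cloud avs-region2 \
  --name gr-to-region1 --query expressRouteAuthorizationKey -o tsv)
CIRCUIT_ID=$(az vmware private-cloud show \
  --resource-group rg-avs-region2 --name avs-region2 \
  --query circuit.expressRouteId -o tsv)

# Create the Global Reach connection from the region-1 private cloud.
az vmware global-reach-connection create \
  --resource-group rg-avs-region1 --private-cloud avs-region1 \
  --name gr-avs1-to-avs2 \
  --peer-express-route-circuit "$CIRCUIT_ID" \
  --authorization-key "$KEY"
```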

![figure2](media/dual-region-fig2.png)
*Figure 2. AVS private clouds in different regions directly connected to each other over a Global Reach connection (green line) between the private clouds’ managed ER circuits. In each Azure region where AVS is available, network infrastructure that terminates the AVS side of the AVS managed circuits is present. It is referred to as “Dedicated ER meet-me location” in the picture.*

## Hybrid connectivity
The recommended option for connecting AVS private clouds to on-prem sites is ExpressRoute Global Reach. Global Reach connections can be established between customer managed ExpressRoute circuits and AVS managed ExpressRoute circuits. Global Reach connections are not transitive; therefore, a full mesh (each AVS managed circuit connected to each customer managed circuit) is required for disaster resilience, as shown in Figure 3 below (orange lines).

![figure3](media/dual-region-fig3.png)
*Figure 3. Global Reach connections (orange lines) can be established between customer managed ExpressRoute circuits and AVS managed ExpressRoute circuits.*
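
As an illustration of the full mesh (again an Azure CLI sketch with placeholder names, not the documented procedure), the four Global Reach connections could be scripted with two nested loops. Note that ExpressRoute authorizations are single-use, so each connection consumes its own authorization on the customer managed circuit.

```bash
# Illustrative only: full mesh between two customer managed circuits and the
# AVS managed circuits in two regions (placeholder names).
for REGION in region1 region2; do
  for CIRCUIT in er-onprem-site1 er-onprem-site2; do
    # One single-use authorization on the customer circuit per connection.
    KEY=$(az network express-route auth create \
      --resource-group rg-connectivity --circuit-name "$CIRCUIT" \
      --name "gr-avs-$REGION" --query authorizationKey -o tsv)
    CIRCUIT_ID=$(az network express-route show \
      --resource-group rg-connectivity --name "$CIRCUIT" --query id -o tsv)
    az vmware global-reach-connection create \
      --resource-group "rg-avs-$REGION" --private-cloud "avs-$REGION" \
      --name "gr-$CIRCUIT" \
      --peer-express-route-circuit "$CIRCUIT_ID" \
      --authorization-key "$KEY"
  done
done
```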

## Azure Virtual Networks connectivity
Azure VNets can be connected to AVS private clouds through connections between ExpressRoute gateways and AVS managed circuits, in exactly the same way Azure VNets can be connected to on-prem sites over customer managed ExpressRoute circuits. Please review the [AVS official documentation](https://learn.microsoft.com/en-us/azure/azure-vmware/tutorial-configure-networking#connect-to-the-private-cloud-manually) for configuration instructions.
In dual-region scenarios, a full mesh is recommended for the ER connections between the two regional hub VNets and private clouds, as shown in Figure 4 (yellow lines).
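
One of the four connections in the mesh might be scripted as follows (an Azure CLI sketch with placeholder names; the AVS-side authorization is created as in the earlier examples). The same pattern repeats for the remaining three gateway/circuit pairs.

```bash
# Illustrative only: connect the region-1 hub's ExpressRoute gateway to the
# region-2 private cloud's managed circuit (placeholder names).
az vmware authorization create \
  --resource-group rg-avs-region2 --private-cloud avs-region2 --name auth-hub1
KEY=$(az vmware authorization show \
  --resource-group rg-avs-region2 --private-cloud avs-region2 \
  --name auth-hub1 --query expressRouteAuthorizationKey -o tsv)
CIRCUIT_ID=$(az vmware private-cloud show \
  --resource-group rg-avs-region2 --name avs-region2 \
  --query circuit.expressRouteId -o tsv)

# ExpressRoute gateway connections are created via the vpn-connection group.
az network vpn-connection create \
  --resource-group rg-hub-region1 --name con-hub1-to-avs2 \
  --vnet-gateway1 ergw-hub-region1 \
  --express-route-circuit2 "$CIRCUIT_ID" \
  --authorization-key "$KEY"
```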

![figure4](media/dual-region-fig4.png)
*Figure 4. By connecting each hub VNet’s ExpressRoute gateway to each AVS private cloud’s managed ExpressRoute circuit (yellow lines), Azure native resources in each region have direct L3 connectivity to AVS private clouds (the global VNet peering connection between the two hub VNets, shown in the previous diagrams, has been omitted for clarity).*

## Internet connectivity
When deploying AVS private clouds in multiple regions, the native options for internet connectivity (managed SNAT or Public IPs down to the NSX-T edge) are recommended. Either option can be configured through the Azure portal (or via PowerShell, CLI or ARM/Bicep templates) at deployment time, as shown in Figure 5 below.

![figure5](media/dual-region-fig5.png)
*Figure 5. AVS native options for internet connectivity in the Azure portal.*

Both options highlighted in Figure 5 provide each private cloud with a direct internet breakout in its own region. The following considerations should inform the decision as to which native internet connectivity option to use (a CLI sketch for enabling either option follows the list):
- Managed SNAT should be used in scenarios with basic and outbound-only requirements (low volumes of outbound connections and no need for granular control over the SNAT pool).
- Public IPs down to the NSX-T edge should be preferred in scenarios with large volumes of outbound connections or when granular control over NAT IP addresses (that is, which AVS VMs are SNATted behind which IP addresses) is required. Public IPs down to the NSX-T edge also support inbound connectivity via DNAT. Inbound internet connectivity is not covered in this article.
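
As a sketch only (Azure CLI with the `vmware` extension, placeholder names, flags that may differ between extension versions), either option could be enabled as follows:

```bash
# Illustrative only: enable one of the two native internet options.

# Option 1 - managed SNAT: toggle the private cloud's internet property.
az vmware private-cloud update \
  --resource-group rg-avs-region1 --name avs-region1 --internet Enabled

# Option 2 - Public IPs down to the NSX-T edge: request a block of public
# IP addresses for the NSX-T edge.
az vmware workload-network public-ip create \
  --resource-group rg-avs-region1 --private-cloud avs-region1 \
  --public-ip pip-nsxt-edge --number-of-public-ips 8
```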

Changing a private cloud’s internet connectivity configuration after initial deployment is possible, but the private cloud will lose connectivity to the internet, Azure VNets and on-prem sites while the configuration is being updated. When either one of the native internet connectivity options above (Figure 5) is used, no additional configuration is required in dual-region scenarios (the topology stays the same as the one shown in Figure 4). For more information on internet connectivity for AVS, please review the [AVS official documentation](https://learn.microsoft.com/en-us/azure/azure-vmware/concepts-design-public-internet-access).

### Azure-native internet breakout
If a secure internet edge was built in Azure VNets prior to AVS adoption, it may be necessary (for reasons such as centralized management of network security policies or cost optimization) to leverage it for internet access for AVS private clouds. Internet security edges in Azure VNets can be implemented using Azure Firewall or third-party firewall/proxy NVAs available on the Azure Marketplace.
Internet-bound traffic emitted by AVS virtual machines can be attracted to an Azure VNet by originating a default route and announcing it, over BGP, to the private cloud’s managed ER circuit. This internet connectivity option can be configured through the Azure portal (or via PowerShell, CLI or ARM/Bicep templates) at deployment time, as shown in Figure 6 below.
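
The AVS-side setting in Figure 6 only makes the private cloud accept a default route; how that route is originated in the VNet is left to the customer. One common pattern, sketched below purely as an assumption (this article does not prescribe it), is a BGP-capable firewall NVA that advertises 0.0.0.0/0 to an Azure Route Server instance, which then propagates the route to the ExpressRoute gateway and on to the AVS managed circuit. Names, ASN and IPs are placeholders.

```bash
# Illustrative only: Route Server setup so that a BGP-capable NVA can
# originate the default route (configuring the NVA itself to advertise
# 0.0.0.0/0 is not shown).
az network routeserver create \
  --resource-group rg-hub-region1 --name rs-hub-region1 \
  --hosted-subnet "/subscriptions/<sub-id>/resourceGroups/rg-hub-region1/providers/Microsoft.Network/virtualNetworks/vnet-hub-region1/subnets/RouteServerSubnet" \
  --public-ip-address pip-rs-hub-region1

# Peer the route server with the NVA (example ASN and peer IP).
az network routeserver peering create \
  --resource-group rg-hub-region1 --routeserver rs-hub-region1 \
  --name nva-peer --peer-asn 65001 --peer-ip 10.0.1.4

# Required for routes learned from the NVA to be exchanged with the
# ExpressRoute gateway.
az network routeserver update \
  --resource-group rg-hub-region1 --name rs-hub-region1 \
  --allow-b2b-traffic true
```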

![figure6](media/dual-region-fig6.png)
*Figure 6. AVS configuration to enable internet connectivity via internet edges hosted in Azure VNets.*
The key consideration in dual-region scenarios is that the default route originated in each region should only be propagated to the AVS private cloud in the same region, so that internet-bound traffic breaks out through the local internet edge.

Removing the AVS cross-region ER connections achieves the goal of injecting, in each private cloud, a default route to forward internet-bound connections to the Azure internet edge in the local region.

If the cross-region ER connections (red, dashed lines in Figure 7) are removed, cross-region propagation of the default route still occurs over Global Reach. However, routes propagated over Global Reach have a longer AS path than the locally originated ones and get discarded by the BGP route selection process.

The cross-region propagation over Global Reach of a less preferred default route provides resiliency against faults of the local internet edge. If a region’s internet edge goes offline, it stops originating the default route, in which case the less preferred default route learned from the remote region gets installed in the AVS private cloud, so that internet-bound traffic is routed via the remote region’s breakout.

The recommended topology for dual-region deployments with internet breakouts in Azure VNets is shown in Figure 8 below.

