Terraform often requires network line-of-sight to manage Azure service data planes like Azure Storage and Key Vault once public network access has been disabled (which, if you are in the enterprise, it probably should be). This situation can also arise when creating a Terraform double-decker (or triple-decker) sandwich, using other Terraform providers to manage a composite infrastructure stack. Good examples of this are Azure/Grafana, Azure/Kubernetes, and Azure/ADX/Grafana; the list could go on and on given the hugely diverse set of Terraform providers out there.
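
For instance, flipping a storage account's data plane to private-only is a single attribute on the resource. A minimal sketch (resource and variable names are illustrative, not from any snippet below):

resource "azurerm_storage_account" "main" {
  name                     = "st${var.name}" # must be globally unique, lowercase alphanumeric
  resource_group_name      = var.resource_group_name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  # Once this is false, data plane calls must arrive over a private endpoint
  public_network_access_enabled = false
}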

Therefore, when operating infrastructure-as-code at scale with Terraform, you will very likely need to set up private network access for the machines that are running Terraform. This would include the build agents for whatever pipeline tool you are using, whether it's Azure DevOps, GitHub Actions or, God forbid, Jenkins.

Using a service like Virtual WAN (or V-WAN, for short) can be extremely useful, as it makes short work of getting your Virtual Networks connected to each other. Hence, the VNet that your Azure DevOps custom hosted pool lives on can be connected to the same VNet that your Azure Storage account is Private Linked to, which allows you to start provisioning Azure Storage data plane resources like containers, blobs, tables and queues without getting shut down.
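
Once the hubs exist, plugging a spoke VNet into one is a single resource. A minimal sketch, assuming a hypothetical build-agent VNet and referencing the primary hub we'll define below:

resource "azurerm_virtual_hub_connection" "build_agents" {
  name           = "vhub-conn-build-agents"
  virtual_hub_id = azurerm_virtual_hub.primary.id

  # Hypothetical VNet hosting your pipeline's build agents
  remote_virtual_network_id = azurerm_virtual_network.build_agents.id
}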

A Virtual WAN is super easy to set up, but because it acts as a foundational piece of infrastructure you want to provision it in as isolated a way as possible, so that entanglements with other infrastructure are kept to a minimum. I would recommend planning your V-WAN and provisioning it, along with its regional Virtual Hubs (the tentacles that project your V-WAN across multiple Azure regions), in a single project.

resource "azurerm_virtual_wan" "main" {
  name                = "vwan-${var.name}"
  resource_group_name = var.resource_group_name
  location            = var.location
}

Like Azure Front Door or Cosmos DB, the V-WAN is provisioned to a primary region but it is a global service. The Virtual Hubs provide presence in the respective regions.

I like to create a well-known “Primary Hub”. This is a useful location to put other pieces of core infrastructure, layered on using other Terraform workspaces.

[Image: V-WAN is fun AND easy! ^_^]

The above diagram illustrates the architecture of this approach. It’s pretty simple, and once you have it set up it makes for a very quick and easy way to extend out to additional Azure regions.

resource "azurerm_virtual_hub" "primary" {
  name                = "vhub-${var.name}-primary"
  resource_group_name = var.resource_group_name
  location            = var.location
  virtual_wan_id      = azurerm_virtual_wan.main.id
  address_prefix      = var.primary_address_prefix
}
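
Other workspaces can then layer onto the hub without re-declaring it by reading its ID from remote state. A sketch, assuming an azurerm backend and that the V-WAN workspace exposes a primary_hub_id output (the backend values are placeholders):

data "terraform_remote_state" "network" {
  backend = "azurerm"
  config = {
    resource_group_name  = "rg-tfstate" # placeholder
    storage_account_name = "sttfstate"  # placeholder
    container_name       = "tfstate"    # placeholder
    key                  = "vwan.terraform.tfstate"
  }
}

# e.g. data.terraform_remote_state.network.outputs.primary_hub_id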

Each Virtual Hub must have its own network address space. This is needed for the hub to perform its function as a transit network in your hub-and-spoke topology, but also to facilitate the placement of other core network services that might sit at the regional edge of your networks, like firewalls and such.

According to the official documentation, the minimum required space is a standard /24; however, the recommended space is a /23, which contains 512 IP addresses. This is my recommendation as well.

The recommended Virtual WAN hub address space is /23. Virtual WAN hub assigns subnets to various gateways (ExpressRoute, site-to-site VPN, point-to-site VPN, Azure Firewall, Virtual hub Router). For scenarios where NVAs are deployed inside a virtual hub, a /28 is typically carved out for the NVA instances. However if the user were to provision multiple NVAs, a /27 subnet might be assigned. Therefore, keeping a future architecture in mind, while Virtual WAN hubs are deployed with a minimum size of /24, the recommended hub address space at creation time for user to input is /23.

We can use a Terraform .tfvars file to pass in the configuration for our environment. You should use your organization’s IPAM tool, or work with your Networking Governance Body, to procure an unused address space.

primary_address_space = "10.38.0.0/23"

We can define additional regions as a map keyed by the Azure regions that we want to deploy across.

additional_regions = {
  eastus2    = "10.38.4.0/23"
  westus3    = "10.38.6.0/23"
  westeurope = "10.38.8.0/23"
}
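
For reference, the root-level variable declarations these .tfvars values feed into might look something like this (a sketch; names match the snippets above):

variable "primary_address_space" {
  description = "Address space for the primary Virtual Hub, e.g. a /23"
  type        = string
}

variable "additional_regions" {
  description = "Map of Azure region name to that hub's address space"
  type        = map(string)
  default     = {}
}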

Finally, we simply iterate over this map to provision the Virtual Hubs that extend our V-WAN across all the desired Azure regions.

resource "azurerm_virtual_hub" "additional_regions" {

  for_each = var.additional_regions

  name                = "vhub-${var.name}-${each.key}"
  resource_group_name = var.resource_group_name
  location            = each.key
  virtual_wan_id      = azurerm_virtual_wan.main.id
  address_prefix      = each.value

}
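
It can also help to expose every hub's ID as a single map, keyed by region, so downstream workspaces can attach their spokes. This output is my own sketch, not part of the original snippets:

output "virtual_hub_ids" {
  value = merge(
    # Primary hub, keyed by its region, merged with the additional hubs
    { (var.location) = azurerm_virtual_hub.primary.id },
    { for region, hub in azurerm_virtual_hub.additional_regions : region => hub.id }
  )
}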

I wrote a module that does this auto-magically. You can check it out here:

module "vwan" {
  source  = "markti/azure-terraformer/azurerm//modules/network/vwan"
  version = "1.0.15"
  resource_group_name    = azurerm_resource_group.main.name

  location               = azurerm_resource_group.main.location
  name                   = "${var.application_name}-${var.environment_name}"
  primary_address_prefix = var.primary_address_space
  additional_regions     = var.additional_regions

}

However, I might be moving it out of my markti namespace on the Terraform Registry to a more permanent home under the azure-terraformer namespace. To use the module, you simply supply the primary address space and a map of the regions and address spaces that you so desire.

My module takes care of the rest from there! Let me know what you think! Until then — Happy Azure Terraforming!