One of our customers approached Aviatrix in search of a high-performance encryption solution between their on-premises data centers and AWS. They were impressed with Aviatrix’s features, including visibility, a dedicated data plane, high-throughput encryption, and Terraform capability. However, they also had sister business entities still using AWS TGW, and didn’t want to spend too much time trying to convince them to switch to Aviatrix. That’s when they turned to us for a hybrid architecture solution.
The hybrid connection between AWS TGW and Aviatrix Transit
- The connection between AWS TGW and Aviatrix Transit can use TGW Connect, which utilizes multiple BGP over GRE tunnels. See my previous blog post: Using AWS TGW Connect with Aviatrix Transit to build GRE tunnels
- Alternatively, the connection can be established via Aviatrix’s Orchestrated TGW workflow, which provides a seamless solution without the GRE throughput limitation. With the Aviatrix Controller, the AWS TGW route tables, VPC route tables, attachments, segmentation, and routing into and out of the security VPC for firewall inspection can all be easily managed. The Aviatrix Transit VPC is attached to the TGW, and the Controller monitors the health of the Aviatrix Transit Gateways and updates the route tables to ensure high availability, even if one of the Transit Gateways goes offline. Aviatrix’s Orchestrated TGW workflow provides BGP-like, software-defined networking behavior (a Terraform sketch of this workflow follows this list). However, Orchestrated TGW is considered a legacy feature, and most of our customers are moving away from it toward the Aviatrix Transit and Spoke architecture.
- It is important to note that this hybrid architecture comes with increased complexity, reduced visibility, and greater operational difficulty, and it will not support important Aviatrix features going forward. Currently, SmartGroups (Micro-Segmentation), cost analysis, Distributed Firewalling, and Advanced NAT are not available in this hybrid architecture.
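For reference, here is a minimal Terraform sketch of the orchestrated workflow. All names, accounts, and IDs are hypothetical example values, and it assumes a recent Aviatrix provider (older provider releases use security_domain_name instead of network_domain_name):

# Minimal sketch: orchestrate an AWS TGW from the Aviatrix Controller and
# attach a VPC to one of the built-in network domains.
resource "aviatrix_aws_tgw" "tgw" {
  tgw_name           = "hybrid-tgw"
  account_name       = "aws-account" # Aviatrix access account for AWS
  region             = "us-east-1"
  aws_side_as_number = "64512"
}

resource "aviatrix_aws_tgw_vpc_attachment" "spoke" {
  tgw_name            = aviatrix_aws_tgw.tgw.tgw_name
  region              = "us-east-1"
  network_domain_name = "Default_Domain" # built-in domain; custom domains work too
  vpc_account_name    = "aws-account"
  vpc_id              = "vpc-0123456789abcdef0"
}

The Controller then keeps the TGW and VPC route tables in sync for you, which is exactly the orchestration described above.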
The connection between Aviatrix Transit and On-premise data centers
- The customer required high-performance encryption from their data centers to AWS, via either Direct Connect or the Internet.
- The customer already has managed ESXi infrastructure in their data centers.
- The customer is looking to gain visibility and advanced NAT features at the edge.
- As such, we selected Edge 2.0, as it operates similarly to an Aviatrix Spoke gateway (advanced NAT features, Controller-managed routing, visibility via CoPilot, and the ability to run on a hypervisor).
- Edge 1.0 is similar to Managed CloudN: both are considered external BGP devices from the Aviatrix Transit perspective, and neither provides the visibility and NAT features of Edge 2.0. Edge 1.0 was developed for customers needing CloudN capability but wanting to run it on a hypervisor.
Edge architecture in AWS highlights
You may have noticed that the Edge 2.0 architecture is very similar to CloudN’s architecture, mentioned in my previous blog post: Direct Connect to Aviatrix Transit – Option 3. The key takeaways were:
- The AWS VPN Gateway (VGW) is attached to the Aviatrix Transit VPC
- The Aviatrix Transit Gateway eth0 subnet route table has Route Propagation enabled to receive routes from the VGW
- The VGW is optionally attached to a Direct Connect Gateway (DXGW). The DXGW gives you the flexibility to connect to a VGW in any region, as well as the ability to filter the routes sent to on-premises.
- If a DXGW is used, the DXGW connects to the AWS Direct Connect virtual interface (VIF), and the DXGW’s AS number is used for peering
- If a DXGW is not used, the VGW needs its own dedicated VIF, which must be in the same region. The VGW’s AS number is used for peering.
- The on-premises router performs BGP peering with either the DXGW or the VGW via the VIF; the on-premises router learns the Aviatrix Transit VPC CIDR, and advertises the subnet range that will be used by Aviatrix Edge back to the Aviatrix Transit VPC.
- The steps above create an underlay between the Aviatrix Transit VPC and the Edge WAN interface (a Terraform sketch of this underlay follows this list). Aviatrix Edge can then use its WAN interface to build High Performance Encryption (HPE) tunnels, i.e. multiple BGP over IPsec tunnels, with the Aviatrix Transit Gateways. This is considered the overlay.
- Aviatrix Edge then uses its LAN interface to exchange the routes learned from the overlay with the on-premises router it peers with.
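This underlay can also be codified with the plain AWS provider. Below is a minimal sketch, assuming you manage the VGW/DXGW yourself; all IDs, names, and CIDRs are hypothetical example values:

# The VGW attaches to the Aviatrix Transit VPC, and its routes are
# propagated into the Transit Gateway eth0 subnet route table.
resource "aws_vpn_gateway" "vgw" {
  vpc_id          = "vpc-0123456789abcdef0" # Aviatrix Transit VPC
  amazon_side_asn = "64512"
}

resource "aws_vpn_gateway_route_propagation" "transit_eth0" {
  vpn_gateway_id = aws_vpn_gateway.vgw.id
  route_table_id = "rtb-0123456789abcdef0" # Transit GW eth0 subnet route table
}

# Optional DXGW: reach a VGW in any region and filter the prefixes
# advertised to on-premises via allowed_prefixes.
resource "aws_dx_gateway" "dxgw" {
  name            = "edge-underlay-dxgw"
  amazon_side_asn = "64513"
}

resource "aws_dx_gateway_association" "vgw_assoc" {
  dx_gateway_id         = aws_dx_gateway.dxgw.id
  associated_gateway_id = aws_vpn_gateway.vgw.id
  allowed_prefixes      = ["10.50.0.0/16"] # what on-premises will learn
}

The private VIF and the BGP session on the on-premises router are configured on the Direct Connect side and the router itself, outside of this sketch.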
Edge 2.0 gateway registration
You can register the Edge 2.0 gateway in the Aviatrix Controller, in Aviatrix CoPilot, or via Terraform.
Register Edge 2.0 gateway in Controller
Multi-Cloud Transit -> Setup -> Spoke -> Cloud Type: Aviatrix
You can use the same Site ID for multiple Aviatrix Edge gateways that are in the same site
Then specify the MGMT/WAN/LAN interface information, and indicate whether the Edge will use a private connection or the Internet as the underlay
Register Edge 2.0 gateway in CoPilot
AirSpace -> Gateways -> Edge Gateways -> + Edge Gateways
You can use the same Site ID for multiple Aviatrix Edge gateways that are in the same site
Then specify the MGMT/WAN/LAN interface information, and indicate whether the Edge will use a private connection or the Internet as the underlay
Register Edge 2.0 gateway in Terraform
You can use the Terraform aviatrix_edge_spoke resource to create an Edge 2.0 gateway
Example code:
# Create a DHCP Edge as a Spoke
resource "aviatrix_edge_spoke" "test" {
  gw_name                     = "edge-test"
  site_id                     = "site-123"
  management_interface_config = "DHCP"
  # WAN and LAN must be on different subnets; example values shown
  wan_interface_ip_prefix     = "10.60.0.10/24"
  wan_default_gateway_ip      = "10.60.0.1"
  lan_interface_ip_prefix     = "10.61.0.10/24"
  ztp_file_type               = "iso"
  ztp_file_download_path      = "/ztp/download/path"
  local_as_number             = "65000"
  prepend_as_path = [
    "65000",
    "65000",
  ]
}
Alternatively, you can use the mc-edge module, which combines the creation, attachment, and peering of multiple Edge 2.0 gateways. I will save it for later, once you are familiar with the full workflow:
https://registry.terraform.io/modules/terraform-aviatrix-modules/mc-edge/aviatrix/latest
You will need the generated Zero Touch Provisioning (ZTP) file for the next step. We will be using an ISO in this example, which needs to be mounted as a CD-ROM. You may also use a Cloud-Init file to bootstrap the Edge VM.
NOTE: the ZTP file contains sensitive key pairs and is only valid for 24 hours. If it expires, you will have to delete the registered Edge gateway, re-register it, and obtain a new ZTP file.
In the Controller, you can observe the Edge gateway under Multi-Cloud Transit -> List -> Spoke; the status is waiting
In CoPilot, you can observe the Edge gateway under AirSpace -> Gateways -> Edge Gateways; the status is waiting
Edge 2.0 gateway VM initialization
Request the Edge image via the Aviatrix support page: https://support.aviatrix.com/ -> Support -> Ticket creation -> select Aviatrix Edge Image request in the Product section.
You will need to make sure that the Edge VM’s MGMT/WAN/LAN interfaces are on different subnets.
- Import the downloaded Edge image into your hypervisor
- The Edge MGMT interface needs Internet access to reach the Aviatrix Controller
- The Edge WAN interface needs line of sight to the Aviatrix Transit Gateways, via either private connectivity or the Internet. If using the Internet as the underlay, each Edge gateway must have its own unique public IP assigned, so the Aviatrix Transit Gateway can distinguish the IPsec tunnels from different Edge gateways.
- The Edge LAN interface needs to be able to form a BGP peering with the on-premises router
- Load the ZTP ISO file into the Edge VM’s virtual CD-ROM (a Terraform sketch for ESXi follows this list)
- Upon detecting the ISO file, the Edge programs its MGMT/WAN/LAN interfaces using the ZTP configuration, then calls back to the Controller. (The MGMT egress public IP is programmed into the Controller security group to allow this connection.)
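Since this customer already runs managed ESXi, even the ISO mount can be codified. Below is a minimal sketch with the vSphere provider, assuming the Edge image was already imported as a template (avx-edge-template) and the ZTP ISO was uploaded to the datastore; every name, path, and sizing value here is a hypothetical placeholder, so consult the Edge documentation for the real requirements:

# Clone the imported Edge template and mount the ZTP ISO as a CD-ROM.
data "vsphere_datacenter" "dc" {
  name = "dc1"
}

data "vsphere_datastore" "ds" {
  name          = "datastore1"
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_resource_pool" "pool" {
  name          = "cluster1/Resources"
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_network" "mgmt" {
  name          = "mgmt-pg"
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_virtual_machine" "edge_template" {
  name          = "avx-edge-template" # imported Edge image
  datacenter_id = data.vsphere_datacenter.dc.id
}

resource "vsphere_virtual_machine" "edge" {
  name             = "branch1gw1"
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.ds.id
  num_cpus         = 2
  memory           = 4096
  guest_id         = data.vsphere_virtual_machine.edge_template.guest_id

  # MGMT port group shown; WAN/LAN port groups are added the same way,
  # each on its own subnet.
  network_interface {
    network_id = data.vsphere_network.mgmt.id
  }

  disk {
    label = "disk0"
    size  = data.vsphere_virtual_machine.edge_template.disks[0].size
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.edge_template.id
  }

  # Mount the ZTP ISO as a virtual CD-ROM.
  cdrom {
    datastore_id = data.vsphere_datastore.ds.id
    path         = "iso/branch1gw1.iso"
  }
}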
Reference: interfaces and required reachability for each interface (see the example after the table)
| Source | Destination | Protocol | Port | Purpose |
|---|---|---|---|---|
| WAN eth0 | Aviatrix Transit Gateway eth0 private / public IP | UDP | 500 | IPsec |
| WAN eth0 | Aviatrix Transit Gateway eth0 private / public IP | UDP | 4500 | IPsec |
| Mgmt eth2 | DNS server | UDP | 53 | DNS lookup |
| Mgmt eth2 | Aviatrix Controller FQDN or public IP address, controller.aviatrixnetwork.com, spire-server.aviatrixnetwork.com | TCP | 443 | Edge to Controller |
| Mgmt eth2 | release.aviatrix.com | TCP | 443 | Edge to release.aviatrix.com |
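The Controller normally programs the Aviatrix gateways’ own security groups for you, but if an intermediate firewall or a security group you manage sits in the underlay path, the table translates to rules like this hypothetical AWS example:

# Hypothetical: allow IPsec (UDP 500/4500) from the Edge WAN public IP
# through a security group you manage in the underlay path.
resource "aws_security_group_rule" "edge_ipsec" {
  for_each          = toset(["500", "4500"])
  type              = "ingress"
  protocol          = "udp"
  from_port         = each.value
  to_port           = each.value
  cidr_blocks       = ["77.77.77.77/32"]     # Edge WAN public IP
  security_group_id = "sg-0123456789abcdef0" # SG you manage in the path
}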
Upon successful initialization, you should observe that the gateway status is now UP
In the Controller
In CoPilot
Attach the Edge 2.0 gateway to Aviatrix Transit
NOTE: This is only possible after the Edge gateway status is UP
The Edge 2.0 gateway is similar to an Aviatrix Spoke gateway, so the attachment workflow is also similar
Attach Edge 2.0 gateway in Controller
Multi-Cloud Transit -> Setup -> Attach/Detach
Attach Edge 2.0 gateway in CoPilot
AirSpace -> Gateways -> Edge Gateways -> … -> Manage Transit Gateway Attachment
Attach Edge 2.0 Gateway in Terraform
Example code:
# Create an Aviatrix Edge as a Spoke Transit Attachment
resource "aviatrix_edge_spoke_transit_attachment" "test_attachment" {
  spoke_gw_name   = "edge-as-a-spoke"
  transit_gw_name = "transit-gw"
}
Edge 2.0 Gateway BGP peering with on-prem devices
NOTE: This is only possible after the Edge gateway status is UP
BGP Peering using Controller
Multi-Cloud Transit -> External Connection -> Select External Device, BGP, LAN
Select the Site ID and the Edge 2.0 gateway, then provide the remote device’s IP address and AS number
BGP peering via CoPilot
As of this writing, CoPilot v3.5.7 does not yet include this feature
BGP peering via Terraform
You can use the aviatrix_edge_spoke_external_device_conn resource to peer an Edge 2.0 gateway with on-premises devices
Example code:
# Create an Edge as a Spoke External Device Connection
resource "aviatrix_edge_spoke_external_device_conn" "test" {
  site_id           = "site-abcd1234"
  connection_name   = "conn"
  gw_name           = "eaas"
  bgp_local_as_num  = "123"
  bgp_remote_as_num = "345"
  local_lan_ip      = "10.230.3.23"
  remote_lan_ip     = "10.0.60.1"
}
Edge 2.0 Terraform module
Now that we are familiar with the workflow, you may have noticed that the steps of creating the ZTP ISO file and loading it onto the VM cannot be easily codified. The mc-edge module actually tries to combine all three steps (creation, attachment, and peering) in one module. However, before the Edge gateway status is UP, you cannot attach the gateway to the transit, nor can you create the BGP peering with on-prem devices, so the module is applied in two stages, as shown below.
Sample code before the Edge is UP
module "branch1" {
source = "terraform-aviatrix-modules/mc-edge/aviatrix"
version = "v1.2.1"
site_id = "branch1"
edge_gws = {
gw1 = {
# ZTP configuration
ztp_file_download_path = "/mnt/c/edge"
ztp_file_type = "iso"
gw_name = "branch1gw1"
# Management interface
management_interface_config = "DHCP"
# management_interface_ip_prefix = "172.16.1.10/24"
# management_default_gateway_ip = "172.16.1.1"
# DNS
# dns_server_ip = "8.8.8.8"
# secondary_dns_server_ip = "8.8.4.4"
# WAN interface
wan_interface_ip_prefix = "10.1.13.2/24"
wan_default_gateway_ip = "10.1.13.1"
wan_public_ip = "77.77.77.77" # Required for peering over internet
# Management over private or internet
enable_management_over_private_network = false
management_egress_ip_prefix = "33.33.33.33/32"
# LAN interface configuraation
lan_interface_ip_prefix = "10.1.12.2/24"
local_as_number = 65010
# prepend_as_path = [65010,65010,65010]
# spoke_bgp_manual_advertise_cidrs = ["192.168.1.0/24"]
## Only enable this when the Edge Gateway status shows up, after loaded ZTP ISO/CloudInit
# bgp_peers = {
# peer1 = {
# connection_name = "branch1gw1-peer"
# remote_lan_ip = "10.1.12.1"
# bgp_remote_as_num = 65300
# }
# }
# Change attached to true, after the Edge Gateway status shows up, after loaded ZTP ISO/CloudInit
# Attach to transit GWs
transit_gws = {
transit1 = {
name = module.mc-transit.transit_gateway.gw_name
attached = false
enable_jumbo_frame = false
enable_insane_mode = true
enable_over_private_network = true
# spoke_prepend_as_path = [65010,65010,65010]
# transit_prepend_as_path = [65001,65001,65001]
}
}
}
}
}
Sample code after the Edge is UP
module "branch1" {
source = "terraform-aviatrix-modules/mc-edge/aviatrix"
version = "v1.2.1"
site_id = "branch1"
edge_gws = {
gw1 = {
# ZTP configuration
ztp_file_download_path = "/mnt/c/gitrepos/dish"
ztp_file_type = "iso"
gw_name = "branch1gw1"
# Management interface
management_interface_config = "DHCP"
# management_interface_ip_prefix = "172.16.1.10/24"
# management_default_gateway_ip = "172.16.1.1"
# DNS
# dns_server_ip = "8.8.8.8"
# secondary_dns_server_ip = "8.8.4.4"
# WAN interface
wan_interface_ip_prefix = "10.1.13.2/24"
wan_default_gateway_ip = "10.1.13.1"
wan_public_ip = "77.77.77.77" # Required for peering over internet
# Management over private or internet
enable_management_over_private_network = false
management_egress_ip_prefix = "33.33.33.33/32"
# LAN interface configuraation
lan_interface_ip_prefix = "10.1.12.2/24"
local_as_number = 65010
# prepend_as_path = [65010,65010,65010]
# spoke_bgp_manual_advertise_cidrs = ["192.168.1.0/24"]
# Only enable this when the Edge Gateway status shows up, after loaded ZTP ISO/CloudInit
bgp_peers = {
peer1 = {
connection_name = "branch1gw1-peer"
remote_lan_ip = "10.1.12.1"
bgp_remote_as_num = 65300
}
}
# Change attached to true, after the Edge Gateway status shows up, after loaded ZTP ISO/CloudInit
# Attach to transit GWs
transit_gws = {
transit1 = {
name = module.mc-transit.transit_gateway.gw_name
attached = true
enable_jumbo_frame = false
enable_insane_mode = true
enable_over_private_network = true
# spoke_prepend_as_path = [65010,65010,65010]
# transit_prepend_as_path = [65001,65001,65001]
}
}
}
}
}
Conclusion
- Aviatrix is very flexible in creating persistent connectivity with your existing architecture. This can be very helpful in a migration phase. You may also read the other blog about Azure migration: Migrate from Azure vNet hub and spoke architecture to Aviatrix Transit
- While a hybrid architecture can coexist over a long time span, it does increase operational complexity and reduces your visibility.
- Edge 2.0 operates like an Aviatrix Spoke and shares similar characteristics, such as advanced NAT, integration with CoPilot, and full Terraform support. More features will be developed on Edge 2.0. I will have a separate blog post covering some of the existing features of Edge 2.0.