As enterprises increasingly strive to simplify their multi-cloud networking management, Aviatrix's MCNA (Multi-Cloud Networking Architecture) has emerged as a leading solution. The MCNA offers a standardized, flexible multi-cloud networking infrastructure that spans regions and clouds with ease. When combined with Aviatrix Edge, enterprises benefit from a dynamic, dedicated data plane and a wealth of day-2 operational enhancements, including deep visibility, auditing, and troubleshooting capabilities. The standardized, intelligent networking helps bridge skill gaps, and advanced security features add an extra layer of protection to the network. It's no wonder that so many organizations are turning to Aviatrix's Multi-Cloud Transit architecture.
However, for organizations that have already deployed cloud networking solutions, migration can seem daunting: there is the fear of risk, and the possibility that they may need to revert to their previous architecture.
In this blog post, I will guide you through the process of migrating from an Azure native vNet hub-and-spoke architecture to Aviatrix Transit. I will show you how to do so seamlessly and with minimal risk, ensuring a smooth transition to the advanced features and benefits of Aviatrix MCNA.
The existing environment
- The center of the diagram shows a hub vNet with two internal load balancers
- The load balancer on the left has a set of firewalls behind it and handles egress traffic
- The load balancer on the right has another set of firewalls behind it and handles east/west traffic between spokes, as well as traffic from spokes to on-premises
- The hub vNet also has a VNG (Virtual Network Gateway) deployed; in my example I'm using a VPN Gateway, but this could also be an ExpressRoute Gateway
- The spoke vNets' route tables have these entries (a Terraform sketch of this spoke setup follows this list):
  - 0.0.0.0/0 points to the ILB (Internal Load Balancer) fronting the egress firewalls
  - The RFC1918 ranges point to the ILB fronting the east/west firewalls
- Spoke vNets are peered with the hub vNet and are configured to use the hub vNet's gateway
- At the bottom, I have a CSR deployed in another vNet to simulate the on-premises environment
- The CSR establishes an IPsec/BGP connection with the VNG; it advertises the on-prem vNet CIDR towards the VNG, and the VNG in turn programs the on-prem vNet CIDR into the hub vNet and the attached spoke vNets
- The VNG advertises the hub and spoke vNets' CIDRs towards the CSR
- The VNG uses its default ASN of 65515, so the CSR learns both spokes' CIDRs from it (the hub CIDR too, but I left it out to simplify the diagram)
- Between spoke vNets, a VM first sends traffic to the E/W firewall ILB, which forwards it to a specific firewall based on ILB hashing; the firewall then delivers it to the destination in the other vNet via the vNet peering
- From a spoke to on-prem, a VM again sends traffic to the E/W firewall ILB, which forwards it on to the VNG, which in turn forwards it to the CSR and on to the on-prem VM
- NOTE: For the spoke vNets that won't be migrated, check whether their existing route tables have Propagate Gateway Routes enabled. With this option on, the VNG programs more specific routes into those tables; when you migrate other spoke vNets to Aviatrix, the more specific routes of the migrated spokes get propagated into the non-migrated vNets, causing return traffic to bypass the hub firewalls. Ideally this option should be disabled; otherwise traffic between the remaining spoke vNets could bypass the firewalls as well.
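To make the starting point concrete, here is a minimal Terraform sketch of the spoke side of this design, assuming the azurerm 3.x provider. The resource names, the workload subnet, and the var.egress_ilb_ip / var.ew_ilb_ip frontend IPs are illustrative placeholders rather than the actual environment, and the referenced resource group, subnet, and vNets are assumed to be defined elsewhere:

```hcl
# Illustrative spoke route table reproducing the pre-migration design:
# default route to the egress firewall ILB, RFC1918 to the E/W firewall ILB.
resource "azurerm_route_table" "spoke1" {
  name                = "spoke1-rt"
  location            = azurerm_resource_group.spoke1.location
  resource_group_name = azurerm_resource_group.spoke1.name

  # Per the note above: keep gateway route propagation disabled so that
  # VNG-learned routes cannot bypass the hub firewalls.
  disable_bgp_route_propagation = true

  route {
    name                   = "default-to-egress-fw"
    address_prefix         = "0.0.0.0/0"
    next_hop_type          = "VirtualAppliance"
    next_hop_in_ip_address = var.egress_ilb_ip # egress firewall ILB frontend IP (assumed)
  }

  route {
    name                   = "rfc1918-10-to-ew-fw"
    address_prefix         = "10.0.0.0/8"
    next_hop_type          = "VirtualAppliance"
    next_hop_in_ip_address = var.ew_ilb_ip # E/W firewall ILB frontend IP (assumed)
  }
  # ...repeat for 172.16.0.0/12 and 192.168.0.0/16
}

resource "azurerm_subnet_route_table_association" "spoke1" {
  subnet_id      = azurerm_subnet.spoke1_workload.id
  route_table_id = azurerm_route_table.spoke1.id
}

# Spoke-to-hub peering that relies on the hub vNet's VNG for on-prem routes.
# The matching hub-to-spoke peering would set allow_gateway_transit = true.
resource "azurerm_virtual_network_peering" "spoke1_to_hub" {
  name                      = "spoke1-to-hub"
  resource_group_name       = azurerm_resource_group.spoke1.name
  virtual_network_name      = azurerm_virtual_network.spoke1.name
  remote_virtual_network_id = azurerm_virtual_network.hub.id
  allow_forwarded_traffic   = true
  use_remote_gateways       = true # "use the hub vNet's gateway"
}
```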
The staging process
- We deploy the Aviatrix Transit, with firewalls deployed behind it. To simplify the illustration, these firewalls handle both east/west and egress traffic. (The newly deployed Aviatrix Transit and firewalls are marked in magenta.)
- From the Aviatrix Transit, we establish IPsec/BGP connections with the same on-prem CSR. (The new IPsec/BGP tunnels are shown in magenta.)
- We add an additional CIDR to each spoke vNet for the Aviatrix spoke gateways: a /25 for HPE (High Performance Encryption) or a /27 for non-HPE (a Terraform sketch of this staging step follows this list).
- We also duplicate the existing route table, and we can examine the copy to confirm it will be appropriate after the cutover. (The duplicated route table is shown in magenta.)
- When we attach the Aviatrix spoke gateways to the Aviatrix Transit, we advertise only the additional CIDR allocated to the spoke gateways.
- The CSR should still have spoke1/spoke2's original CIDRs; additionally:
  - It learns the additional spoke gateway CIDR from the VNG with a metric of 100
  - It also learns the additional spoke gateway CIDR from the Aviatrix Transit with a metric of 0 (which is preferred)
- The newly learned routes on the CSR are marked in magenta
- Now is a good time to test connectivity from on-prem to the newly deployed spoke gateways, and to get the firewall rules set up
- It's also a good idea to try this out in a test spoke first
- Existing spoke-to-spoke and spoke-to-on-prem traffic flows stay exactly as they were
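Here is a rough Terraform sketch of that staging step, with the same illustrative assumptions as the earlier snippet; the 10.255.1.0/25 block is just an example allocation for the HPE spoke gateways:

```hcl
# Extend the spoke vNet's address space with the additional CIDR that the
# Aviatrix spoke gateways will use (/25 for HPE, /27 for non-HPE).
resource "azurerm_virtual_network" "spoke1" {
  name                = "spoke1-vnet"
  location            = azurerm_resource_group.spoke1.location
  resource_group_name = azurerm_resource_group.spoke1.name

  # Original spoke CIDR plus an example /25 for the HPE spoke gateways.
  address_space = ["10.1.0.0/16", "10.255.1.0/25"]
}

# Duplicate of the existing spoke route table, staged but not yet associated
# with any subnet. It keeps the current ILB next hops for now; once the spoke
# is attached at cutover, the Aviatrix Controller programs the 0.0.0.0/0 and
# RFC1918 entries to point at the Aviatrix spoke gateways instead.
resource "azurerm_route_table" "spoke1_aviatrix" {
  name                          = "spoke1-rt-aviatrix"
  location                      = azurerm_resource_group.spoke1.location
  resource_group_name           = azurerm_resource_group.spoke1.name
  disable_bgp_route_propagation = true

  route {
    name                   = "default-to-egress-fw"
    address_prefix         = "0.0.0.0/0"
    next_hop_type          = "VirtualAppliance"
    next_hop_in_ip_address = var.egress_ilb_ip
  }
  # ...copy the RFC1918 entries from the original table as well
}
```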
The traffic switch phase
When it's time to switch the traffic over:
- We detach the Aviatrix spoke from the Transit
- We point the subnet to the newly copied route table (the association is shown in magenta; a Terraform view of the cutover follows this list)
- We remove the vNet peering between the spoke and the hub vNet; the VNG no longer advertises the spoke CIDR or the additional spoke gateway CIDR
- We attach the Aviatrix spoke to the Transit; at this point the Aviatrix Controller programs the 0.0.0.0/0 and RFC1918 routes to point to the Aviatrix spoke gateways
- We start advertising the spoke's original and additional CIDRs via the Aviatrix Transit. In the CSR route table, we can see that the Aviatrix spoke CIDRs are now learned from the Aviatrix Transit ASN (shown in magenta)
- The migrated spoke now reaches on-prem resources via Aviatrix Spoke -> Aviatrix Transit -> Firewall -> CSR
- The migrated spoke now reaches the remaining spokes via Aviatrix Spoke -> Aviatrix Transit -> Firewall -> CSR -> VNG -> other spoke resources
- The migrated spoke's egress traffic now flows via Aviatrix Spoke -> Aviatrix Transit -> Firewall -> Internet
- If any testing fails at this point, we can easily revert to the original route table and re-establish the vNet peering between the spoke and the hub vNet to restore the original data path
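Expressed in the same illustrative Terraform terms, the cutover (and the rollback) comes down to which route table the subnet association points at and whether the native peering still exists:

```hcl
# Cutover: re-point the workload subnet at the staged route table copy.
# Rollback is simply the reverse: point route_table_id back at the original
# table and re-create the spoke<->hub peering.
resource "azurerm_subnet_route_table_association" "spoke1" {
  subnet_id = azurerm_subnet.spoke1_workload.id

  # Before cutover: azurerm_route_table.spoke1.id (the original table)
  # After cutover : the staged copy that the Aviatrix Controller programs
  route_table_id = azurerm_route_table.spoke1_aviatrix.id
}

# Removing (or commenting out) the peering resources at cutover detaches the
# spoke from the native hub, so the VNG stops advertising the spoke CIDRs.
# resource "azurerm_virtual_network_peering" "spoke1_to_hub" { ... }
# resource "azurerm_virtual_network_peering" "hub_to_spoke1" { ... }
```

The Professional Services toolkit mentioned in the summary below automates this switch (and its rollback), which is how the cutover is kept to seconds.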
Summary
To summarize the whole process:
To ensure minimal disruption to your network during migration, we build parallel data paths, which allows a phased migration of selected vNets. A migrated vNet can continue to communicate with the remaining vNets as well as with on-premises. To further reduce downtime, we create a copy of the route table and use it during the cutover phase to redirect the data path, while controlling the route advertisement to on-premises.
While manual traffic switching is an option, it can take up to several minutes to move the data path. To streamline this, our Professional Services team has developed a toolkit that automates the entire process, resulting in only a few seconds of downtime during the cutover. The cutover is even smoother in AWS, thanks to faster API response times and quicker route-change propagation.
The toolkit also allows staging and switching multiple vNets at once, and includes options to easily revert to the original state. With this toolkit, we have successfully helped customers migrate hundreds and even thousands of vNets. Our Professional Services team has extensive experience with these migrations and can provide expert advice on migration plans and assist in migrating brownfield workloads to Aviatrix MCNA.
Terraform code for Azure Hub and Spoke vNet peering and OnPrem access simulation
I've created Terraform code to:
- Create two hub vNets with Palo Alto VMs deployed as the E/W and egress firewalls
- Deploy a VNG in the E/W vNet to simulate on-prem connectivity to a CSR in a different Resource Group/vNet, with a test instance deployed (a sketch of this VNG-to-CSR piece follows this list)
- Deploy spoke vNets with test instances, peered with the two hub vNets
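For reference, the VNG-to-CSR portion of that simulation looks roughly like the snippet below. The SKU, the CSR-side ASN of 65001, and the variable names are illustrative assumptions rather than necessarily what the published code uses, and the resource group, GatewaySubnet, and public IP are assumed to be defined elsewhere:

```hcl
# Route-based VPN gateway in the E/W hub vNet, using Azure's default ASN 65515.
resource "azurerm_virtual_network_gateway" "vng" {
  name                = "hub-ew-vng"
  location            = azurerm_resource_group.hub_ew.location
  resource_group_name = azurerm_resource_group.hub_ew.name

  type       = "Vpn"
  vpn_type   = "RouteBased"
  sku        = "VpnGw1"
  enable_bgp = true

  bgp_settings {
    asn = 65515
  }

  ip_configuration {
    name                          = "vng-ipconfig"
    subnet_id                     = azurerm_subnet.gateway.id # must be named GatewaySubnet
    public_ip_address_id          = azurerm_public_ip.vng.id
    private_ip_address_allocation = "Dynamic"
  }
}

# The CSR that simulates on-prem, represented as a local network gateway.
resource "azurerm_local_network_gateway" "csr" {
  name                = "onprem-csr"
  location            = azurerm_resource_group.hub_ew.location
  resource_group_name = azurerm_resource_group.hub_ew.name
  gateway_address     = var.csr_public_ip # CSR public IP (assumed variable)

  bgp_settings {
    asn                 = 65001 # example CSR-side ASN
    bgp_peering_address = var.csr_bgp_peer_ip
  }
}

# IPsec/BGP connection between the VNG and the CSR.
resource "azurerm_virtual_network_gateway_connection" "vng_to_csr" {
  name                       = "vng-to-csr"
  location                   = azurerm_resource_group.hub_ew.location
  resource_group_name        = azurerm_resource_group.hub_ew.name
  type                       = "IPsec"
  virtual_network_gateway_id = azurerm_virtual_network_gateway.vng.id
  local_network_gateway_id   = azurerm_local_network_gateway.csr.id
  shared_key                 = var.ipsec_psk
  enable_bgp                 = true
}
```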
Architecture of the deployed environment