Migrate from Azure vNet hub and spoke architecture to Aviatrix Transit

As enterprises increasingly strive to simplify their Multi-Cloud Networking Management, Aviatrix’s MCNA (Multi-Cloud Networking Architecture) has emerged as a leading solution. The MCNA offers a standardized and flexible multi-cloud networking infrastructure that spans regions and clouds with ease. When combined with Aviatrix Edge, enterprises benefit from a dynamic and dedicated data plane, and a wealth of day-2 operational enhancements, including deep visibility, auditing, and troubleshooting capabilities. The standardized and intelligent networking helps bridge skill gaps, and advanced security features add an extra layer of protection to the network. It’s no wonder that so many organizations are turning to Aviatrix’s Multi-Cloud Transit architecture.

However, for organizations that have already deployed cloud networking solutions, the migration process can be perceived as daunting, with the fear of risk and the possibility of needing to revert to their previous architecture.

In this blog post, I will guide you through the process of migrating from an Azure native vNet hub and spoke architecture to Aviatrix Transit. I will show you how to do so seamlessly and with minimal risk, ensuring a smooth transition to the advanced features and benefits of Aviatrix MCNA.

Continue reading

Express Route to Aviatrix Transit – Option 3

In past blog posts, we have reviewed two options for connecting from on-premises to Aviatrix Transit:

  • Express Route to Aviatrix Transit – Option 1, where we build a BGP over IPSec overlay towards the Aviatrix Transit. This solution has the following constraints:
    • Each IPSec tunnel has a 1.25 Gbps throughput limit
    • Azure only supports IPSec, not GRE, as the tunneling protocol
    • The on-premises device must support BGP over IPSec, and building/maintaining the IPSec tunnels from the on-premises device is a manual process.
  • Express Route to Aviatrix Transit – Option 2, where we utilize Azure Route Server and some smart design to bridge BGP between the Aviatrix Transit, Azure Route Server and ExpressRoute Gateway, then towards the on-premises device. This solution has the following constraints:
    • ARS can only exchange up to 200 routes with ERGW
    • No end-to-end encryption from on-premises towards the Aviatrix Transit; only MACsec can be used between the on-premises devices and the Microsoft Enterprise Edge routers.
    • Additional architectural complexity/cost and loss of operational visibility; also, this solution is Azure-only, which means you will end up with a different architecture in each cloud.

For enterprises moving business-critical applications to multiple clouds, needing point-to-point encryption without sacrificing throughput, and looking for a unified solution that provides enterprise-level visibility, control, auditability, standardization and troubleshooting toolsets, neither of the two solutions above is ideal. IPsec is an industry standard supported by all Cloud Service Providers, but how can we overcome its limitation of 1.25 Gbps per tunnel?

Aviatrix’s winning formula solves these challenges with its patented technology called High Performance Encryption (HPE). It automatically builds multiple IPSec tunnels over either private connectivity, such as ExpressRoute, or over the Internet. Aviatrix then combines these tunnels into one logical pipe, achieving line-rate encryption of up to 25 Gbps per appliance.
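HPE's internals aren't covered here, but the general idea of spreading flows across many tunnels so the aggregate behaves like a single pipe can be illustrated with a small, purely hypothetical sketch (the tunnel count and flow tuples below are made up for illustration):

```python
# Toy illustration of spreading flows across multiple tunnels (ECMP-style
# per-flow hashing). This is NOT Aviatrix's actual HPE implementation -- just a
# sketch of why many 1.25 Gbps tunnels can be aggregated into one logical pipe.
import hashlib

TUNNELS = [f"tun{i}" for i in range(20)]  # e.g. 20 x 1.25 Gbps ~= 25 Gbps

def pick_tunnel(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    """Hash a flow key so a given flow always rides the same tunnel."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return TUNNELS[digest % len(TUNNELS)]

flows = [
    ("10.1.1.10", "10.50.2.20", 40001, 443),
    ("10.1.1.11", "10.50.2.21", 40002, 443),
    ("10.1.1.12", "10.50.2.22", 40003, 8443),
]
for flow in flows:
    print(flow, "->", pick_tunnel(*flow))
```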

Aviatrix has several products that support HPE from edge locations: CloudN (physical form factor) and Edge 1.0 and Edge 2.0 (virtual and physical form factors). They can be deployed in on-premises data centers, co-locations, branch offices or retail locations. These edge devices give customers enterprise-grade visibility and control, monitoring, auditing and troubleshooting capabilities, and provide a unified architecture across all major Cloud Service Providers. These solutions let us easily extend all the goodies of the Aviatrix Transit and Spoke architecture from the clouds to on-premises.

In this blog, we will focus on how CloudN is deployed and connected to the Aviatrix Transit. Below is the architecture diagram:

Continue reading

Express Route to Aviatrix Transit – Option 2

In the last blog post, Express Route to Aviatrix Transit – Option 1, we discussed how to use BGP over IPSec as an overlay from the customer's on-premises devices to the Aviatrix Transit Gateways. That solution has two constraints:

  • Each IPSec tunnel has a 1.25 Gbps throughput limit
  • Azure only supports IPSec, not GRE, as the tunneling protocol

For customers with a larger ExpressRoute circuit, such as 5 Gbps or 10 Gbps and above, who don't have an encryption requirement or whose on-premises devices aren't capable of IPSec, option 1 isn't ideal. In this blog, I will discuss an architecture that connects to Aviatrix Transit and utilizes the full ExpressRoute bandwidth.

In the following architecture diagram:

  • The Aviatrix Controller must be 6.8 or above to support multi-peer BGP over LAN for Azure Route Server. Azure Route Server requires full-mesh peering to avoid a single point of failure, which would otherwise black-hole the traffic flow.
  • The Aviatrix Transit Gateways must have Insane Mode (High Performance Encryption, HPE) enabled, as well as BGP over LAN.
    • The Aviatrix Controller enables “Propagate gateway routes” only on the BGP over LAN interface subnet's route table.
  • The on-premises to ExpressRoute circuit private peering is similar to Express Route to Aviatrix Transit – Option 1.
  • Instead of deploying the ExpressRoute Gateway (ERGW) inside the Aviatrix Transit vNet, we need to create a separate vNet to house the ERGW and Azure Route Server (ARS):
    • When native vNet peering is used between a Spoke and the Aviatrix Transit, and ARS sits in the same Aviatrix Transit vNet, traffic from the spoke to on-premises will bypass the Aviatrix Transit Gateway: the more specific on-premises routes inserted by the ERGW point to the ERGW, while Aviatrix programs the less specific RFC1918 routes pointing to the Aviatrix Transit.
    • This also applies to HPE-enabled Aviatrix Spokes, because when HPE is enabled, native vNet peering is used as the underlay to build multiple tunnels between the Aviatrix Spoke Gateways and the Aviatrix Transit Gateways.
    • From the Aviatrix Transit vNet, create a vNet peering with ARS_ERGW_VNet and enable use_remote_gateways. This allows the ERGW to propagate learned routes into the Transit vNet (see the peering sketch after this list).
    • From the ARS_ERGW_VNet vNet, create a vNet peering with the Aviatrix Transit vNet and enable allow_gateway_transit.
    • vNet peering is subject to $0.01 per GB for both inbound and outbound data transfer.
  • Multi-hop eBGP is enabled between ARS and Aviatrix Transit Gateway
  • ARS requires a dedicated RouteServerSubnet subnet, /27 or larger, which cannot have a UDR or Network Security Group (NSG) attached
  • ERGW requires a dedicated GatewaySubnet subnet, /27 or larger, which cannot have a UDR or Network Security Group (NSG) attached
  • Branch-to-branch must be enabled on ARS to exchange routes between ARS and the ERGW
  • ARS supports 8 BGP peers; each peer supports up to 1000 routes
  • ARS can only exchange up to 200 routes with ERGW
  • ARS is a route reflector and is not in the traffic path.
  • ARS cost: $0.45 USD/hour, or about $324 USD per month, which is not cheap for a service that's not in the data path.
  • When you create or delete an Azure Route Server in a virtual network that contains a Virtual Network Gateway (ExpressRoute or VPN), expect downtime until the operation completes. Reference Link
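To make the two peerings described in the list above concrete, here is a minimal sketch using the azure-mgmt-network Python SDK. The subscription ID, resource group and vNet names are hypothetical placeholders, and the exact model fields may vary slightly between SDK versions, so treat this as an outline rather than a drop-in script:

```python
# Minimal sketch of the two vNet peerings described above:
#  - Transit vNet -> ARS_ERGW_VNet with use_remote_gateways, so the Transit
#    vNet receives routes learned by the ERGW
#  - ARS_ERGW_VNet -> Transit vNet with allow_gateway_transit
# Names, resource group and subscription ID are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

sub_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), sub_id)
rg = "rg-transit"

transit_vnet_id = (
    f"/subscriptions/{sub_id}/resourceGroups/{rg}"
    "/providers/Microsoft.Network/virtualNetworks/aviatrix-transit-vnet"
)
ars_ergw_vnet_id = (
    f"/subscriptions/{sub_id}/resourceGroups/{rg}"
    "/providers/Microsoft.Network/virtualNetworks/ARS_ERGW_VNet"
)

# Transit vNet side: consume the remote gateway (ERGW) in ARS_ERGW_VNet.
client.virtual_network_peerings.begin_create_or_update(
    rg, "aviatrix-transit-vnet", "to-ars-ergw",
    {
        "remote_virtual_network": {"id": ars_ergw_vnet_id},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,
        "use_remote_gateways": True,   # learn routes propagated by the ERGW
    },
).result()

# ARS_ERGW_VNet side: offer its gateway to the peered Transit vNet.
client.virtual_network_peerings.begin_create_or_update(
    rg, "ARS_ERGW_VNet", "to-aviatrix-transit",
    {
        "remote_virtual_network": {"id": transit_vnet_id},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,
        "allow_gateway_transit": True,  # let the Transit vNet use this ERGW
    },
).result()
```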
Continue reading

Express Route to Aviatrix Transit – Option 1

Today we are starting to discuss the first of three options to connect on-premises to Aviatrix Transit. This architecture allows you to use an existing IPSec- and BGP-capable networking device to connect to Aviatrix Transit. I've listed brief steps below, with the constraints highlighted:

  • Create ExpressRoute (ER) Circuit
  • Configure Azure private BGP peering from the ER circuit to the on-premises device
  • Deploy Aviatrix Transit vNet and Transit Gateways
  • Create the GatewaySubnet for the ExpressRoute Gateway (ERGW) in the Aviatrix Transit vNet and deploy the ExpressRoute Gateway
  • Create ER Connection between the ER circuit and ERGW
  • Validate that BGP routes are propagated to the Aviatrix Transit Gateway eth0 subnet route table and verify connectivity; this connectivity will act as the underlay (see the validation sketch after this list)
  • Create BGP over IPSec tunnels from the on-premises device towards the Aviatrix Transit Gateways as the overlay, to exchange on-premises routes with cloud routes
  • Each IPSec tunnel has a 1.25 Gbps throughput limit
  • Azure only supports IPSec, not GRE, as the tunneling protocol
  • The maximum number of IPv4 routes advertised over Azure private peering from the vNet address space for an ExpressRoute connection is 1000, but since we are using the BGP over IPSec overlay, we can bypass this limit.
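One way to check the validation step above — that the on-premises BGP routes landed in the Transit Gateway's eth0 subnet route table — is to dump the effective routes of the gateway's eth0 NIC. Here is a minimal sketch with the azure-mgmt-network Python SDK; the subscription ID, resource group and NIC name are hypothetical, and field shapes may differ slightly between SDK versions:

```python
# Minimal sketch: list the effective routes on the Aviatrix Transit Gateway's
# eth0 NIC to confirm the on-premises prefixes arrived via the ERGW (underlay).
# The resource group and NIC name are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(
    DefaultAzureCredential(),
    "00000000-0000-0000-0000-000000000000",  # subscription ID placeholder
)

poller = client.network_interfaces.begin_get_effective_route_table(
    "rg-aviatrix-transit", "aviatrix-transit-gw-eth0-nic"
)
for route in poller.result().value:
    # Routes whose source is "VirtualNetworkGateway" were propagated by the
    # ERGW. address_prefix / next_hop_ip_address come back as lists in the SDK.
    print(route.source, route.address_prefix,
          route.next_hop_type, route.next_hop_ip_address)
```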
Continue reading

Learning about Traceroute, ICMP and the IP Route Table

We use traceroute very often and sometimes take it for granted, until a very interesting question hit me and we had to dive a little deeper to get the answer. Here's the full story:

Aviatrix CloudN is an appliance that helps deliver line-rate encryption from on-premises towards the Aviatrix Transit Gateways. It ships with three interfaces:

  • eth0: WAN interface; this is where the IPSec tunnels are built towards the Aviatrix Transit Gateways, and where the BGP sessions between CloudN and the Aviatrix Transit Gateways are established.
  • eth1: LAN interface; this is where BGP is established between CloudN and the on-premises router.
  • eth2: MGMT interface; this is where you connect to CloudN for management, and where CloudN connects to the Internet for software updates.

It's a very common practice to have all three interfaces connected to the same router, with VRFs configured on the router to segment the three interfaces. As you may recall from my previous blog, Direct Connect to Aviatrix Transit – Option 3, the WAN/LAN/MGMT (not in the diagram) can connect to the same router as shown below.

After we had CloudN inline with traffic, when the customer tried a traceroute from on-premises towards the cloud, they discovered that the CloudN hop responded with the management interface IP, rather than the LAN interface IP.

The customer was rightfully concerned that the data traffic might actually be going through the MGMT interface instead of the LAN interface.
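You can reproduce the observation with a few lines of Scapy (assumed installed via pip, and run as root). The key point the sketch highlights: the IP you see at each hop is whatever source address the responding device put on its ICMP Time Exceeded message, which is not necessarily the interface that forwarded your packet. The destination address below is a hypothetical example:

```python
# Minimal traceroute sketch using Scapy, printing which IP answered each hop --
# this is how the customer noticed the CloudN hop replying from its MGMT address.
from scapy.all import IP, UDP, sr1

def trace(dst: str, max_hops: int = 16) -> None:
    for ttl in range(1, max_hops + 1):
        probe = IP(dst=dst, ttl=ttl) / UDP(dport=33434)
        reply = sr1(probe, timeout=2, verbose=0)
        if reply is None:
            print(f"{ttl:2d}  *")
            continue
        # reply.src is the source IP the responding device chose for its ICMP
        # Time Exceeded message -- not necessarily the forwarding interface.
        print(f"{ttl:2d}  {reply.src}")
        if reply.src == dst:
            break

if __name__ == "__main__":
    trace("10.20.30.40")  # hypothetical cloud destination
```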

Continue reading

Get a free public SSL certificate for testing environment using Posh-ACME

In my previous blog post, Add SSL Certificate to Aviatrix Controller, we went through how to obtain a free public SSL certificate using ZeroSSL. It has a great interface and you can get up to three 90-day certificates for free, but it has the following drawbacks:

  • Expired certificates count towards the total of three free 90-day certificates; if you have two expired certificates, you can only get one more.
  • It takes a bit of tinkering to get a full-chain certificate using commands (see the small sketch after this list).
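For context on the second point, building the full chain manually just means stitching the leaf certificate and the CA bundle into one PEM file. A tiny sketch, with hypothetical file names:

```python
# Concatenate the leaf certificate with the CA bundle into a full-chain PEM.
# File names are hypothetical examples; adjust to what your CA hands you.
with open("certificate.crt") as leaf, open("ca_bundle.crt") as chain, \
        open("fullchain.pem", "w") as out:
    out.write(leaf.read().rstrip() + "\n" + chain.read())
print("Wrote fullchain.pem (leaf + intermediates)")
```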

Then I was introduced to Posh-ACME, a PowerShell module for requesting and obtaining free SSL certificates. Let's take a look at how it works.

Continue reading

Aviatrix NAT use case – Use spoke gateway as egress gateway for private subnet

In AWS, a subnet whose default route 0.0.0.0/0 does not point to an Internet Gateway (IGW) is considered a private subnet, while a subnet whose default route 0.0.0.0/0 points to an IGW is considered a public subnet. Instances running in a private subnet still need Internet access to download patches, packages, etc. You may use an AWS NAT Gateway in a public subnet to provide this connectivity. A NAT Gateway costs $0.045 USD per hour plus $0.045 per GB of data processed.
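As a rough back-of-the-envelope comparison using the on-demand rates quoted above (and an approximate 730-hour month; the traffic volumes are made-up examples):

```python
# Back-of-the-envelope NAT Gateway cost estimate, using the on-demand rates
# quoted above ($0.045/hour + $0.045/GB processed). Traffic volumes are
# hypothetical; plug in your own numbers.
HOURLY_RATE = 0.045      # USD per NAT Gateway hour
PER_GB_RATE = 0.045      # USD per GB processed
HOURS_PER_MONTH = 730    # approximate

def nat_gateway_monthly_cost(gb_processed: float) -> float:
    return HOURLY_RATE * HOURS_PER_MONTH + PER_GB_RATE * gb_processed

for gb in (100, 1_000, 10_000):
    print(f"{gb:>6} GB/month -> ${nat_gateway_monthly_cost(gb):,.2f}")
# ~ $32.85 fixed per month before data processing; reusing an existing
# Aviatrix Spoke Gateway with an SNAT rule avoids this extra charge.
```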

If you already have an Aviatrix Spoke Gateway deployed, need Internet access (egress) from a private subnet, and don't need any fancy egress control, then you may reuse the existing Aviatrix Spoke Gateway as an egress gateway by using an SNAT rule.

If you need better control and traffic inspection, you should consider the Aviatrix FQDN egress gateway for L7 egress control based on Fully Qualified Domain Names, e.g. allow https://github.com, deny https://youtube.com. Or, if deep packet inspection using a Next Generation Firewall (NGFW) is required, you may consider Aviatrix FireNet with NGFW integration.
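The allow/deny idea is easy to picture with a toy sketch; this is not the Aviatrix FQDN gateway implementation, just the decision logic that FQDN-based egress control represents (patterns and hostnames are made up):

```python
# Toy illustration of FQDN-based allow/deny egress decisions.
from fnmatch import fnmatch

ALLOW = ["github.com", "*.github.com"]
DENY = ["youtube.com", "*.youtube.com"]

def egress_decision(hostname: str) -> str:
    if any(fnmatch(hostname, pattern) for pattern in DENY):
        return "deny"
    if any(fnmatch(hostname, pattern) for pattern in ALLOW):
        return "allow"
    return "deny"  # this sketch assumes default-deny

for host in ("api.github.com", "youtube.com", "example.com"):
    print(host, "->", egress_decision(host))
```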

Simple diagram:

Continue reading

Azure Route Server BGP multi-peer with Aviatrix Transit

When you connect a third-party Network Virtual Appliance (NVA), such as a firewall, SD-WAN appliance, load balancer, router or proxy, into Azure, you need to redirect network traffic towards these NVAs for data processing. In the past, this often meant manual route table entries had to be created and maintained: different route table entries need to be entered at the source, the destination, the NVAs, and potentially in the middle of the data path.

In Azure, these static entries are called User Defined Routes (UDRs), where you specify the target IP range, the next-hop device type, and the next-hop IP address. A simple UDR use case is shown below, where two vNets connect via an NVA in a hub vNet. Now imagine you have hundreds of vNets and your workloads constantly change: these manual entries are error prone, inflexible, and super difficult to troubleshoot. While the cloud promises agility and flexibility, these manual entries are counterintuitive and slow everything down.
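To make the UDR concept concrete, here is a minimal sketch using the azure-mgmt-network Python SDK. The resource names, prefixes and NVA address are hypothetical, and field names may differ slightly across SDK versions:

```python
# Minimal sketch: create a route table with one UDR that steers spoke traffic
# through an NVA. Resource names, prefixes and the NVA IP are hypothetical;
# verify the fields against your azure-mgmt-network version.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

rg, location = "rg-hub", "eastus"

# Route table with a single UDR: send traffic for the other spoke via the NVA.
client.route_tables.begin_create_or_update(
    rg,
    "rt-spoke1",
    {
        "location": location,
        "routes": [
            {
                "name": "to-spoke2-via-nva",
                "address_prefix": "10.2.0.0/16",      # destination range
                "next_hop_type": "VirtualAppliance",  # steer to the NVA
                "next_hop_ip_address": "10.0.0.4",    # hub NVA private IP
            }
        ],
    },
).result()

# Every affected subnet (source, destination, and anything mid-path) needs a
# matching association and return route -- which is exactly what becomes
# unmanageable at scale.
```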

Continue reading

Direct Connect to Aviatrix Transit – Option 3

In the last two blog posts, we discussed two methods for connecting on-premises to Aviatrix Transit via Direct Connect:

  • Option 1: Use a detached Virtual Private Gateway (VGW) to build BGP over IPSec tunnels with the Aviatrix Transit. This solution has the following constraints: 1.25 Gbps per IPSec tunnel, a maximum of 100 prefixes between on-premises and the cloud, and potential exposure to man-in-the-middle attacks.
  • Option 2: Use an attached VGW to build underlay connectivity between the on-premises router/firewall and the Aviatrix Transit VPC, then use GRE tunnels to build overlay connectivity between the on-premises router/firewall and the Aviatrix Transit. This solution provides 5 Gbps per GRE tunnel and bypasses the 100-prefix limitation. However, it only works with AWS, and still has potential exposure to man-in-the-middle attacks.

Today, more and more enterprises are adopting multiple cloud service providers (CSPs), some due to mergers and acquisitions, some due to partner/vendor preferences, and some simply because one CSP provides superior products that are not offered by the others.

Is there a solution that can standardize the networking architecture across all CSPs, provide the necessary security and bandwidth, deliver enterprise-grade features, and help enterprises achieve day-2 operational excellence?

Continue reading

Direct Connect to Aviatrix Transit – Option 2

In my last blog post, I covered one option for connecting an on-premises data center to Aviatrix Transit via Direct Connect. It's easy to implement, but it has the following drawbacks:

  • Each IPSec tunnel between the Aviatrix Transit and the AWS Virtual Private Gateway (VGW) is limited to 1.25 Gbps of throughput, and we can only have 4 tunnels, which limits the aggregate throughput to 5 Gbps. For customers who want higher throughput, this isn't viable.
  • A Private Virtual Interface supports up to 100 BGP routes; the BGP session will go DOWN when more routes are advertised.
  • Between on-premises and the VGW, traffic may be protected by MACsec, but it is still exposed to man-in-the-middle attacks. Reference article: Securing your network connection to the cloud: MACSec vs. IPSec

How do we overcome these constraints? Let me take you through the second option for connecting to Aviatrix Transit via Direct Connect.

Continue reading