When working with customers or partners, you will often need to exchange files. Sending files via email is not always secure. If you have an Azure subscription, you can use the Azure Storage Account SFTP feature to enable secure file transfer.
Create Azure Storage Account
In the following example, I’m creating an Azure Blob Storage / Azure Data Lake Storage Gen2 LRS storage account named storageaccountmn9ujm in East US, then clicking Next.
In the Advanced tab, select Enable hierarchical namespace, then select Enable SFTP
Select Review and Create
Create local SFTP account and assign permission
Once the storage account is created, go to the storage account -> left side panel -> Settings -> SFTP
Click Add local user
Specify a Username. I’d pick SSH Key pair over SSH password for additional security. You have a choice to either:
Generate new key pair
Useful when you intend to use this SFTP endpoint yourself and don’t already have a key pair. Azure will generate a public/private key pair, store the public key, and prompt you to download the private key.
Use existing key stored in Azure
Useful when you already have a key pair and have uploaded the public key into Azure. Since I already have a public key uploaded, I’m selecting this option for demo purposes.
Use existing public key
You may ask your customer / partner for their public key, then enter it in the Key name OR Public key section.
Always remember to provide a description for the key for future reference.
Switch to Permissions tab
First, Containers -> Create new -> create a container for storing the uploaded files. Note that by default the container is Private.
Give permission to the local user for the container.
If you want the user to go directly to your container, then specify the container name as the Home (landing) directory
Click Add; you will see the local user get created. Pay attention to the Connection String.
How customer / partner upload files
You will need to provide the Connection string to your customer / partner; in my example: storageaccountmn9ujm.secureupload@storageaccountmn9ujm.blob.core.windows.net
The customer / partner then goes to a command prompt and runs:
sftp -i <private_key> <connection_string>
In the example below, after adding the new connection to the list of known hosts, you will be at the sftp> prompt
To send a file
put <local_file_path> <remote_file_name>
Validate file uploaded
Storage account -> Storage browser -> Blob Containers -> <container> -> check if the file is there
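Putting the upload and validation steps together, a hypothetical end-to-end session might look like the following. The private key path, container name (secureupload), and file name are assumptions for illustration; the CLI check requires az login and a data-plane role such as Storage Blob Data Reader.

```shell
# Connect using the private key and the Connection string from the portal
sftp -i ~/.ssh/sftp_private_key storageaccountmn9ujm.secureupload@storageaccountmn9ujm.blob.core.windows.net

# At the sftp> prompt:
#   sftp> put ./report.csv report.csv
#   sftp> ls
#   sftp> exit

# Optional: verify the upload from the Azure CLI instead of Storage browser
az storage blob list \
  --account-name storageaccountmn9ujm \
  --container-name secureupload \
  --auth-mode login \
  --output table
```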
Many businesses use Equinix as a global provider of secure data centers and high-speed interconnection services. Aviatrix customers are interested in leveraging Equinix to connect to other partners via secure, high-speed private connections. In this blog, I will show you how to connect an Azure ExpressRoute Circuit with Aviatrix Edge as Transit (Virtual Device) in Equinix, when IPSec encryption isn’t required on the ER circuit to the Aviatrix Edge as Transit.
Steps needed:
Create Express Route (ER) Circuit in Azure Portal
Create Edge as Transit (EAT) in Equinix
Use the ER Circuit service key to create Express Route connection towards the EAT interfaces
Configure Azure Private Peering with unique VLAN ID.
Wait for Equinix to update the Destination VLAN tag (if this never happens, you might have a stale VLAN ID on the port; contact Equinix support to resolve it)
Confirm Layer 2 connectivity between ER Microsoft Enterprise Edge Router (MSEE) and EAT
Establish BGP over LAN connection between MSEE and Aviatrix EAT
Connect the Express Route Circuit with Express Route gateway for your Azure vNet hub or vWAN.
Create Express Route (ER) Circuit in Azure Portal
In the Azure Portal, search for ExpressRoute circuits, click Create, select your Subscription and Resource Group, and select Standard Resilience if the connection is only coming from a single Equinix location
Provide the Region, Circuit Name, Port Type (Provider in this case), Peering Location, Provider (Equinix), Bandwidth, SKU and Billing model
You may choose to Enable recommended alert rules and provide Tags in the next two steps, then Review + Create
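The circuit creation above can also be sketched with the Azure CLI. The resource group, circuit name, location, peering location, and bandwidth below are placeholder values; adjust them to match your order with Equinix.

```shell
az network express-route create \
  --resource-group my-rg \
  --name er-equinix \
  --location eastus \
  --peering-location "Washington DC" \
  --provider Equinix \
  --bandwidth 1000 \
  --sku-tier Standard \
  --sku-family MeteredData
```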
Caution: If you will use Equinix Internet Access, you cannot choose your own ASN for the EAT, as Equinix Internet Access will assign a private ASN to you for the Internet Access BGP connection. In my lab worksheet, 64516 was assigned by Equinix because WAN2/eth3 is connected to Equinix Internet Access (I will create a separate blog just for this topic)
You will need to plan each interface’s configuration as needed. Note that EAT reserves eth2 as the Management port, hence the WAN port number doesn’t always align with the eth port number. Example worksheet I used for lab planning:
For this example, we will use WAN5/eth6 for the Azure ER connection; note BGP (underlay) is set to Off. The primary CIDR and default gateway need to be in the same /30 range, and the Azure MSEE will always use the 2nd IP. Also note the MSEE always has 12076 as its AS number.
Example of WAN5/eth6 configuration
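As a hypothetical illustration of the /30 addressing rule above (the addresses are examples only, not from the actual lab):

```shell
# Hypothetical WAN5/eth6 worksheet values:
#   Interface:        WAN5 / eth6
#   BGP (underlay):   Off
#   Primary CIDR:     192.168.64.1/30   # EAT takes the 1st usable IP
#   Default gateway:  192.168.64.2      # MSEE always takes the 2nd usable IP
#   Remote AS number: 12076             # MSEE is always AS 12076
```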
Use the ER Circuit service key to create Express Route connection towards the EAT interfaces
In the newly created ER Circuit, copy the Service key string
In Equinix Portal, select Connections -> Create Connection
Find Microsoft Azure -> Select Service -> Create Connection (Network Edge Device)
Click on Create a Connection to Azure Express Route
Select Virtual Device -> Location -> Virtual Devices (without HA), Redundant Devices (with HA; EAT is always Active/Active), Cluster (doesn’t apply, as it’s Active/Standby) -> select the EAT device -> enter the ER Circuit Service Key copied from the last step. Equinix will validate the key; if it’s valid you can click Next
I have been struggling with the Seller C-Tag. There’s a high chance that entering a Seller C-Tag causes a conflicting VLAN ID during connection creation, and when this happens, you will not be able to clean up the connection without engaging Equinix support. Equinix suggests leaving the Seller C-Tag empty. It’s also important to select the interfaces you assigned on the EAT devices.
You may review and submit order.
In the Connections -> Connections inventory page, you will see the two newly created connections in the provisioning state with blue hourglass icons
After some waiting and refreshing the page, you will see the blue hourglass icons turn into orange triangles with exclamation marks, indicating Pending BGP.
Click on the connection itself and note down the Destination VLAN Tagging ID; in the following example: 113. Also note that the Unique ID (UUID) is needed when you open a support ticket with Equinix, if you run into issues.
Configure Azure Private Peering with unique VLAN ID
Switch back to the Azure Portal; note that the ER circuit Provider status is now Provisioned
Peerings -> Click on Azure private (notice it’s Not provisioned)
Enter the Peer ASN (EAT ASN), the Primary/Secondary /30 subnets for the two links, and the same VLAN ID noted in the last step from the Equinix ER connection’s Destination VLAN Tagging. You may enter a BGP MD5 key as the Shared key here if you have specified one (with Aviatrix you will have to use the Terraform aviatrix_transit_external_device_conn resource’s bgp_md5_key and backup_bgp_md5_key values to set the MD5 key).
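The private peering can also be configured with the Azure CLI; the ASN, /30 subnets, VLAN ID, and shared key below are placeholder values:

```shell
az network express-route peering create \
  --resource-group my-rg \
  --circuit-name er-equinix \
  --peering-type AzurePrivatePeering \
  --peer-asn 65010 \
  --primary-peer-subnet 192.168.64.0/30 \
  --secondary-peer-subnet 192.168.64.4/30 \
  --vlan-id 113 \
  --shared-key MySecretMd5Key   # optional BGP MD5 key
```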
Wait for Equinix to update the Destination VLAN tag
Wait about 5 minutes; if you are watching the Equinix portal under Connections -> Connections inventory, you may see a popup on the right side saying the connections are provisioned. There seems to be a backend process picking up the changes on Azure Private Peering.
If you refresh the Connections inventory page, you may see the status gradually change from Equinix pending (left side, blue hourglass) – Provider provisioned (right side, green up arrow)
to both Equinix provisioned (left side, green up arrow) and Provider provisioned (right side, green up arrow)
Now click on the connection itself. You may notice the Destination VLAN Tagging changed from 113 to 113.113 (where the right-side portion is the VLAN ID set on Azure Private Peering). If this never changes, you will need to open a support ticket with Equinix to check whether the same VLAN ID became stale on the port and needs to be cleaned up.
Confirm Layer 2 connectivity between ER Microsoft Enterprise Edge Router (MSEE) and EAT
Switch back to Azure Portal -> ER Circuit -> Peerings -> Select Azure private -> View ARP records
If everything goes well, you should see both the MSEE and EAT NIC MAC addresses. If you only see the MSEE MAC address: 1. The VLAN configuration didn’t work. 2. You forgot to assign an IP to the interface on EAT
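The same ARP check can be done from the Azure CLI (the resource group and circuit name are placeholders; use --path secondary for the second link):

```shell
az network express-route list-arp-tables \
  --resource-group my-rg \
  --name er-equinix \
  --peering-name AzurePrivatePeering \
  --path primary \
  --output table
```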
You can also find EAT interface MAC address in Equinix portal -> Network Edge -> Virtual Device Inventory -> Select your EAT device -> Interfaces
Now you should be able to ping from the EAT to the MSEE to confirm connectivity: CoPilot -> Diagnostics -> Diagnostic tools -> select the EAT instance -> select the interface -> ping the MSEE IP
Establish BGP over LAN connection between MSEE and Aviatrix EAT
Name the connection, select BGP over LAN, select the primary EAT, uncheck ActiveMesh,
add an additional Remote device, enter the Remote device IP and Local LAN IP for both the primary and HA EAT, enter 12076 as the MSEE ASN, then save the configuration.
If everything goes well, the two BGP connections will come up
Connect the Express Route Circuit with Express Route gateway for your Azure vNet hub or vWAN.
After the connection is created, you can click on ER Circuit -> Peerings -> Select Azure Private -> View route table
It will show the local routes (AS 65515) and the remote routes received from the EAT
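The route table can also be pulled from the Azure CLI (resource group and circuit name are placeholders):

```shell
az network express-route list-route-tables \
  --resource-group my-rg \
  --name er-equinix \
  --peering-name AzurePrivatePeering \
  --path primary \
  --output table
```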
HINT
If you are terminating two ER circuits on the same EAT device, both ER MSEEs will have AS 12076, so a prefix learned from one ER circuit will be dropped by the other ER circuit due to the duplicate ASN. You will have to use EAT Manual BGP Advertisement towards both ER BGP over LAN connections to replace 12076 with the EAT’s ASN.
az vm image terms accept --urn publisher:offer:sku:version
To accept Aviatrix Controller Marketplace offer:
az vm image terms accept --urn aviatrix-systems:aviatrix-bundle-payg:aviatrix-enterprise-bundle-byol:latest
To accept Aviatrix CoPilot Marketplace offer:
az vm image terms accept --urn aviatrix-systems:aviatrix-copilot:avx-cplt-byol-01:latest
To validate, replace ‘accept’ with ‘show’ and rerun the command, it should say:
"accepted": true,
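For example, to check just the acceptance flag for the CoPilot offer (the JMESPath query is optional):

```shell
az vm image terms show \
  --urn aviatrix-systems:aviatrix-copilot:avx-cplt-byol-01:latest \
  --query accepted \
  --output tsv
```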
To cancel the offer:
az vm image terms cancel --urn aviatrix-systems:aviatrix-bundle-payg:aviatrix-enterprise-bundle-byol:latest
az vm image terms cancel --urn aviatrix-systems:aviatrix-copilot:avx-cplt-byol-01:latest
As enterprises increasingly strive to simplify their Multi-Cloud Networking Management, Aviatrix’s MCNA (Multi-Cloud Networking Architecture) has emerged as a leading solution. The MCNA offers a standardized and flexible multi-cloud networking infrastructure that spans regions and clouds with ease. When combined with Aviatrix Edge, enterprises benefit from a dynamic and dedicated data plane and a wealth of day-2 operational enhancements, including deep visibility, auditing, and troubleshooting capabilities. The standardized and intelligent networking helps bridge skill gaps, and advanced security features provide an extra layer of protection to the network. It’s no wonder that so many organizations are turning to Aviatrix’s Multi-Cloud Transit architecture.
However, for organizations that have already deployed cloud networking solutions, the migration process can be perceived as daunting, with fear of risk and the possibility of needing to revert to their previous architecture.
In this blog post, I will guide you through the process of migrating from an Azure native vNet hub and spoke architecture to Aviatrix Transit. I will show you how to do so seamlessly and with minimal risk, ensuring a smooth transition to the advanced features and benefits of Aviatrix MCNA.
Azure only supports IPSec, not GRE, as a tunneling protocol
The on-premises device must support BGP over IPSec, and building/maintaining IPSec tunnels from the on-premises device is a manual process.
Express Route to Aviatrix Transit – Option 2, where we utilize Azure Route Server and some smart design to bridge BGP between Aviatrix Transit, Azure Route Server, and the ExpressRoute Gateway, then towards the on-premises device. This solution has the following constraints:
ARS can only exchange up to 200 routes with ERGW
No end-to-end encryption between on-premises and Aviatrix Transit; only MACsec can be used between on-premises devices and the Microsoft Enterprise Edge router.
Additional architecture complexity/cost and loss of operational visibility; also, this solution is Azure-only, meaning you will end up with a different architecture in different clouds.
For enterprises moving business-critical applications to multi-cloud that need point-to-point encryption without sacrificing throughput, and that are looking for a unified solution providing enterprise-level visibility, control, auditability, standardization, and troubleshooting toolsets, neither of the above two solutions is ideal. IPsec is an industry standard supported by all Cloud Service Providers, but how can we overcome its limitation of 1.25 Gbps per tunnel?
Aviatrix’s winning formula solves these challenges with its patented technology called High Performance Encryption (HPE). It automatically builds multiple IPSec tunnels over either private connectivity, such as ExpressRoute, or over the Internet. Aviatrix then combines these tunnels into a logical pipe to achieve line-rate encryption of up to 25 Gbps per appliance.
Aviatrix has several products supporting HPE from edge locations: CloudN (physical form factor) and Edge 1.0 and Edge 2.0 (virtual and physical form factors). They can be deployed in on-premises data centers, co-locations, branch offices, or retail locations. These edge devices give customers enterprise-grade visibility, control, monitoring, auditing, and troubleshooting capabilities, as well as a unified architecture for all major Cloud Service Providers, making it easy to extend the benefits of the Aviatrix Transit and Spoke architecture from the clouds to on-premises.
In this blog, we will focus on how CloudN is deployed and connected to Aviatrix Transit. Below is the architecture diagram:
In the last blog post, Express Route to Aviatrix Transit – Option 1, we discussed how to use BGP over IPSec as an overlay from customer on-premises devices to Aviatrix Transit Gateways. That solution has these two constraints:
Each IPSec tunnel has a 1.25 Gbps throughput limit
Azure only supports IPSec, not GRE, as a tunneling protocol
For customers with a larger ExpressRoute circuit, such as 5 Gbps or 10 Gbps and above, who don’t have an encryption requirement or whose on-premises devices aren’t capable of IPSec, option 1 isn’t ideal. In this blog, I will discuss an architecture to connect to Aviatrix Transit and utilize the full ExpressRoute bandwidth.
In the following architecture diagram:
Aviatrix Controller must be 6.8 or above to support Multi-Peer BGPoLAN for Azure Route Server. Azure Route Server requires full-mesh peering to avoid a single point of failure, which would result in black-holed traffic.
Aviatrix Transit Gateway must have Insane Mode (High Performance Encryption HPE) enabled, as well as BGP Over LAN enabled.
Aviatrix Controller enables “Propagate gateway routes” only on the BGP over LAN interface subnet route table.
Instead of deploying the ExpressRoute Gateway (ERGW) inside the Aviatrix Transit vNet, we need to create a separate vNet to house the ERGW and Azure Route Server (ARS)
When native vNet peering is used between a Spoke and the Aviatrix Transit, if ARS is in the same Aviatrix Transit vNet, traffic from the spoke to on-premises will bypass the Aviatrix Transit gateway: more specific routes from on-premises will be inserted by the ERGW pointing to the ERGW, whereas Aviatrix programs less specific RFC1918 routes pointing to the Aviatrix Transit
This also applies to HPE-enabled Aviatrix Spokes, since when HPE is enabled, native vNet peering is used as the underlay to build multiple tunnels between the Aviatrix Spoke Gateway and the Aviatrix Transit Gateways.
From the Aviatrix Transit vNet, create a vNet peering with ARS_ERGW_VNet and enable use_remote_gateways. This enables the ERGW to propagate learned routes to the Transit vNet
From ARS_ERGW_VNet, create a vNet peering with the Aviatrix Transit vNet and enable allow_gateway_transit.
vNet peering is subject to $0.01 per GB for both inbound and outbound data transfer.
Multi-hop eBGP is enabled between ARS and Aviatrix Transit Gateway
ARS requires a dedicated RouteServerSubnet subnet, /27 or larger, which cannot have a UDR or Network Security Group (NSG) attached
ERGW requires a dedicated GatewaySubnet subnet, /27 or larger, which cannot have a UDR or Network Security Group (NSG) attached
Branch to Branch must be enabled on ARS to exchange routes between ARS and ERGW
ARS supports 8 BGP peers; each peer supports up to 1000 routes
ARS can only exchange up to 200 routes with ERGW
ARS is a route reflector and is not in the traffic path.
ARS cost: $0.45 USD/hour, or about $324 USD per month; for a service that’s not in the data path, it’s not cheap
When you create or delete an Azure Route Server in a virtual network that contains a Virtual Network Gateway (ExpressRoute or VPN), expect downtime until the operation completes. Reference Link
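The dedicated-subnet and peering requirements above can be sketched with the Azure CLI as follows. The resource group, vNet names, and CIDRs are illustrative placeholders only:

```shell
# Dedicated subnets for ERGW and ARS (/27 or larger, no UDR/NSG attached)
az network vnet subnet create -g my-rg --vnet-name ars-ergw-vnet \
  --name GatewaySubnet --address-prefixes 10.10.0.0/27
az network vnet subnet create -g my-rg --vnet-name ars-ergw-vnet \
  --name RouteServerSubnet --address-prefixes 10.10.0.32/27

# Transit vNet side: consume the remote (ERGW) gateway
az network vnet peering create -g my-rg --vnet-name avx-transit-vnet \
  --name transit-to-ars-ergw --remote-vnet ars-ergw-vnet \
  --allow-vnet-access --use-remote-gateways

# ARS/ERGW vNet side: allow gateway transit towards the Transit vNet
az network vnet peering create -g my-rg --vnet-name ars-ergw-vnet \
  --name ars-ergw-to-transit --remote-vnet avx-transit-vnet \
  --allow-vnet-access --allow-gateway-transit
```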
Today we start discussing the first of three options to connect on-premises to Aviatrix Transit. This architecture allows you to use existing IPSec- and BGP-capable networking devices to connect to Aviatrix Transit. I’ve listed brief steps with the constraints highlighted
Create ExpressRoute (ER) Circuit
Configure Azure Private BGP Peering from the ER Circuit to On-Premise device
Deploy Aviatrix Transit vNet and Transit Gateways
Create GatewaySubnet for ExpressRoute Gateway (ERGW) in Aviatrix Transit vNet and deploy Express Route Gateway
Create ER Connection between the ER circuit and ERGW
Validate that BGP routes propagated to the Aviatrix Transit Gateway eth0 subnet route table and confirm connectivity. This connectivity will act as the underlay
Create BGP over IPSec tunnels from on-premise device towards Aviatrix Transit Gateways as overlay to exchange on-premise routes with cloud routes
Each IPSec tunnel has a 1.25 Gbps throughput limit
Azure only supports IPSec, not GRE, as a tunneling protocol
The maximum number of IPv4 routes advertised from Azure private peering from the VNet address space for an ExpressRoute connection is 1000. But since we are using a BGP over IPSec overlay, we can bypass this limit.
When you connect a third-party Network Virtual Appliance (NVA), such as a firewall, SD-WAN appliance, load balancer, router, or proxy, into Azure, you need to redirect network traffic towards these NVAs for data processing. In the past, this often meant manually creating and maintaining route table entries; different entries need to be entered at the source, the destination, the NVAs, and potentially in the middle of the data path.
In Azure, these static entries are called User Defined Routes (UDR), where you specify the target IP range, the next-hop device type, and the next-hop IP address. A simple use case of UDR is shown below, where two vNets connect via an NVA in a hub vNet. Now imagine you have hundreds of vNets and your workloads constantly change: these manual entries are error-prone, inflexible, and very difficult to troubleshoot. While the cloud promises agility and flexibility, these manual entries are counterintuitive and slow everything down.
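As a minimal illustration of a UDR, the sketch below routes one spoke’s traffic for another spoke (10.2.0.0/16) through an NVA at 10.0.1.4 in the hub. All names and addresses are hypothetical:

```shell
# Create a route table for spoke1
az network route-table create -g my-rg --name rt-spoke1

# Add a route: traffic for spoke2 goes to the NVA in the hub
az network route-table route create -g my-rg \
  --route-table-name rt-spoke1 \
  --name to-spoke2-via-nva \
  --address-prefix 10.2.0.0/16 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4

# Associate the route table with spoke1's workload subnet
az network vnet subnet update -g my-rg \
  --vnet-name spoke1-vnet --name workload \
  --route-table rt-spoke1
```

A mirror-image route table on spoke2 (and often one on the NVA subnet) is also needed, which is exactly the maintenance burden described above.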
There are times when we need to build connectivity between Aviatrix Transit and Azure VPN Gateways. I’ve created a Terraform module for a quick lab demonstrating how this can be done.