In this blog, I will show you, step by step, how to integrate Okta IdP (identity provider) with the Aviatrix controller.
Secure Aviatrix Controller with Azure Application Gateway V2
The Aviatrix controller is already hardened. You may further lock it down via Settings -> Controller -> Access Security -> Controller Security Group Management. The controller is then protected by a security group allowing access only from Aviatrix Gateways. Customers can add their own egress public IPs and CoPilot public IPs to the security group, allowing inbound HTTPS access to the Aviatrix controller.
Enterprises already utilizing Azure Application Gateway/WAF may want to place the Aviatrix controller behind it for tighter security. This blog post shows how to place the Aviatrix controller behind Azure Application Gateway/WAF.
Add SSL Certificate to Aviatrix Controller
When the Aviatrix controller is first launched from the marketplace, it gives itself a self-signed certificate, and you have to use https://controller-ip to access it, which may not meet compliance requirements.
This blog covers obtaining a publicly trusted SSL certificate for the Aviatrix controller.
Enable private connectivity to workloads deployed in multiple default VPCs – Part 1
Scenario: One of our customers is primarily in Azure. After mergers and acquisitions, they acquired hundreds of AWS accounts where workloads are deployed to default VPCs, which all share the same address space: 172.31.0.0/16.
They are looking for a solution that provides bi-directional private connectivity from Azure to these workloads in AWS without the overhead of route management, while also providing visibility into the traffic.
Enable private connectivity to workloads deployed in multiple default VPCs – Part 2
Scenario: One of our customers is primarily in Azure. After mergers and acquisitions, they acquired hundreds of AWS accounts where workloads are deployed to default VPCs, which all share the same address space: 172.31.0.0/16.
They are looking for a solution that provides bi-directional private connectivity from Azure to these workloads in AWS without the overhead of route management, while also providing visibility into the traffic.
TechTalk | Securing Cloud Egress—The Easy Way
When operating in the cloud, enterprises often struggle with how to gain control of network traffic leaving their environments in a centralized, cost-effective, and CSP-agnostic way.
In this webinar, you’ll learn how to make cloud egress architecture simple, repeatable, and automated—including how to:
- Gain visibility and control of internet-destined traffic in a cost-effective way (FQDN filtering, distributed and centralized designs)
- Insert next-generation firewalls into internet-outbound traffic and deal with thousands of route entries
- Scale up and scale out your egress firewalls in an active manner and retain existing flows
- Plus, the benefits of leveraging Aviatrix FireNet, ThreatIQ, ThreatGuard, and Anomaly Detection.
Terraform AWS Cross-Account access
Prerequisites
- Two AWS accounts: AccountA and AccountB
- An IAM programmatic-access user already set up and working for Terraform in AccountA; let's call this user Terraform-User. It already has a role assigned in AccountA
- We are going to use the same Terraform-User access key and secret to work on resources in AccountB
Create a new role in AccountB
- Trusted entity -> AWS account -> since AccountB needs to trust AccountA, enter AccountA's account ID
- Assign the required permission policies to this role, e.g. AdministratorAccess
- Assign a role name, e.g. CrossAccountSignin
- Example of the role JSON created:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::AccountA_Account_ID:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}
- Note down the ARN of this role, e.g.:
arn:aws:iam::AccountB_Account_ID:role/CrossAccountSignin
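The console steps above can also be sketched with the AWS CLI. This is a hedged example, not from the original post; the trust.json filename is an assumption and would contain the trust-policy JSON shown above. It requires credentials for AccountB:

```shell
# Hypothetical CLI equivalent of the console steps (run with AccountB credentials).
# trust.json is assumed to contain the trust-policy JSON shown above.
aws iam create-role \
  --role-name CrossAccountSignin \
  --assume-role-policy-document file://trust.json

# Attach the managed policy used in the example (AdministratorAccess).
aws iam attach-role-policy \
  --role-name CrossAccountSignin \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
```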
Create and assign policy in AccountA
- Use the following JSON definition
- "Resource" points to the ARN of the CrossAccountSignin role created in AccountB
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole"
      ],
      "Resource": "arn:aws:iam::AccountB_Account_ID:role/CrossAccountSignin"
    }
  ]
}
- Assign this policy to AccountA IAM user: Terraform-User
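Before wiring this into Terraform, you can optionally verify the whole chain works. A sketch, assuming Terraform-User's credentials are active and AccountB_Account_ID is replaced with the real account ID:

```shell
# Should return temporary credentials if the trust policy and the user
# policy are both correct; an AccessDenied error means one side is wrong.
aws sts assume-role \
  --role-arn arn:aws:iam::AccountB_Account_ID:role/CrossAccountSignin \
  --role-session-name cross-account-test
```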
Assume role in Terraform
- providers.tf
Notice that an alias is created for account_b:
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  region = "us-east-1"
  alias  = "account_b"
  assume_role {
    role_arn = "arn:aws:iam::AccountB_Account_ID:role/CrossAccountSignin"
  }
}
- main.tf
resource "aws_vpc" "account_a_vpc" {
  cidr_block = "10.0.1.0/24"
  tags = {
    "Name" = "account_a_vpc"
  }
}

resource "aws_vpc" "account_b_vpc" {
  provider   = aws.account_b
  cidr_block = "10.0.2.0/24"
  tags = {
    "Name" = "account_b_vpc"
  }
}
- resource.aws_vpc.account_a_vpc will create a VPC in AccountA implicitly, using the default provider
- resource.aws_vpc.account_b_vpc will create a VPC in AccountB by explicitly specifying provider = aws.account_b
Cross account access to data
Similar to resource blocks, you can do the same for data blocks. Example:
- Same providers.tf
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  region = "us-east-1"
  alias  = "account_b"
  assume_role {
    role_arn = "arn:aws:iam::AccountB_Account_ID:role/CrossAccountSignin"
  }
}
- data.tf
data "aws_availability_zones" "az_zones" {
}

data "aws_availability_zones" "app_az_zones" {
  provider = aws.account_b
}
- data.aws_availability_zones.az_zones will retrieve availability zones as Terraform-User in AccountA
- data.aws_availability_zones.app_az_zones will retrieve availability zones by assuming the role in AccountB
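As a quick sketch (not in the original post), you can surface both results side by side with outputs, to confirm each data source resolved against the intended account:

```hcl
output "account_a_azs" {
  value = data.aws_availability_zones.az_zones.names
}

output "account_b_azs" {
  value = data.aws_availability_zones.app_az_zones.names
}
```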
Cross account for module
Assume we have the following folder structure:
.
|_ main.tf
|_ providers.tf
|_ modules
   |_ app
      |_ main.tf
      |_ providers.tf
The root /providers.tf has the same statements as before:
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  region = "us-east-1"
  alias  = "account_b"
  assume_role {
    role_arn = "arn:aws:iam::AccountB_Account_ID:role/CrossAccountSignin"
  }
}
The root /main.tf has the following statement:
module "app1" {
  source = "./modules/app"
  ...
}
If you run it, you may find that resources get created in the default account, AccountA, where Terraform-User resides. How do we make the resources get created in AccountB instead?
Think of a module as a mini block of Terraform code that also requires its own provider block. If you don't specify anything in /modules/app/providers.tf, it will implicitly have this block; basically, it's looking for a provider called aws:
provider "aws" {
}
So we modify /main.tf like this:
module "app1" {
  source = "./modules/app"
  providers = {
    aws = aws.account_b
  }
  ...
}
This tells Terraform that, within the module, provider.aws is equal to the root provider aws.account_b.
If you rerun terraform apply, you will notice:
- Resources created in AccountA remain
- New resources now get created in AccountB
- Warning message:
Warning: Provider aws is undefined
│
│ on main.tf line 8, in module "app1":
│ 8: aws = aws.account_b
│
│ Module module.app1 does not declare a provider named aws.
│ If you wish to specify a provider configuration for the module, add an entry for aws in the required_providers block within the module.
To make Terraform happy, add the following lines to /modules/app/providers.tf:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
Add permanent environment variables
When you are using Terraform or other tools that require environment variables, you may find that the environment variables don't persist between sessions.
Here’s how I took care of it:
Windows
This is rather easy: open System Properties by running sysdm.cpl from the command line, then click Environment Variables.
Add or edit environment variables, such as AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY; the settings will take effect the next time you launch a Command Prompt or PowerShell session.
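Alternatively, the same can be scripted. A rough sketch from PowerShell using the built-in setx command (the values are placeholders); note that setx affects new sessions only, not the current one:

```powershell
# Persist user-level environment variables (visible in new sessions only).
setx AWS_ACCESS_KEY_ID "your-access-key-id"
setx AWS_SECRET_ACCESS_KEY "your-secret-access-key"
```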
Linux / Mac
First, find out which shell you are using:
$ echo $SHELL
- If it returns /bin/bash, you are using bash and need to edit ~/.bashrc
- If it returns /bin/zsh, you are using zsh and need to edit ~/.zshrc
Example ~/.bashrc file. Make sure to place export in front of each line, and there should be no whitespace around the equals sign (=):
$ cat ~/.bashrc
# environment variables
export AVIATRIX_CONTROLLER_IP='****'
export AVIATRIX_PASSWORD='****'
export AVIATRIX_USERNAME='****'
export AWS_ACCESS_KEY_ID='****'
export AWS_SECRET_ACCESS_KEY='****'
export GOOGLE_APPLICATION_CREDENTIALS='****'
These settings will take effect the next time you launch a console session.
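To pick up changes without opening a new terminal, you can source the file in the current session. A minimal sketch using a local vars.sh stand-in for ~/.bashrc (the file name and IP value are placeholders):

```shell
# Write an export line, in the same format used in ~/.bashrc
# (vars.sh and the IP value are placeholders for illustration).
cat > vars.sh <<'EOF'
export AVIATRIX_CONTROLLER_IP='203.0.113.10'
EOF

# Load it into the current shell session
# ("." is the portable form of "source").
. ./vars.sh
echo "$AVIATRIX_CONTROLLER_IP"
```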
Compare AWS resource configurations
So you have created your resources manually in AWS and they work fine, but when you try to create the same resources using Terraform, they just won't work?
I ran into this issue when trying to create S3 + policy + roles for a Palo Alto bootstrap. Here's how I resolved it; please feel free to comment if you have better methods.
Background:
I followed this article and created the S3 bucket and folder structure, and uploaded bootstrap.xml and init-cfg.txt under the config folder, and it worked fine. But when I tried Terraform scripts from my buddy, it just didn't work. There must be some delta causing the issue.
This is a very easy problem to tackle in Azure: for most resources, you can export to an ARM or Bicep template, which reveals all configurations.
It isn't as straightforward in AWS. Looking at the AWS CLI, the aws s3 command has the following subcommands:
$ aws s3 ?
usage: aws [options] <command> <subcommand> [<subcommand> ..] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument subcommand: Invalid choice, valid choices are:
ls | website
cp | mv
rm | sync
mb | rb
presign
None of them describe the current configuration.
There is an s3api command, but it appears you must query each subcommand individually, such as the huge list below. And what if my solution is much more complicated than just S3? This snowballs quickly and becomes hard to manage:
get-bucket-accelerate-configuration
get-bucket-acl
get-bucket-analytics-configuration
get-bucket-cors
get-bucket-encryption
get-bucket-intelligent-tiering-configuration
get-bucket-inventory-configuration
get-bucket-lifecycle-configuration
get-bucket-location
get-bucket-logging
get-bucket-metrics-configuration
get-bucket-notification-configuration
get-bucket-ownership-controls
get-bucket-policy
get-bucket-policy-status
get-bucket-replication
get-bucket-request-payment
get-bucket-tagging
get-bucket-versioning
get-bucket-website
get-object
get-object-acl
get-object-attributes
get-object-legal-hold
get-object-lock-configuration
get-object-retention
get-object-tagging
get-object-torrent
get-public-access-block
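One way to tame that list is to loop over the subcommands you care about and dump everything into one file per bucket for diffing. A rough sketch, not from the original post; the bucket name is a placeholder, and errors for features that are not set are replaced with a marker:

```shell
# Dump a handful of s3api "get-bucket-*" views for one bucket into a
# single file, so two buckets can be diffed side by side.
BUCKET="my-bucket"   # placeholder
for sub in get-bucket-policy get-bucket-acl get-bucket-encryption \
           get-bucket-versioning get-public-access-block; do
  echo "== $sub =="
  aws s3api "$sub" --bucket "$BUCKET" 2>/dev/null || echo "(not set)"
done > "$BUCKET-config.txt"
```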
Then I came across AWS Config, which tracks the configuration of each resource.
AWS Config – Getting started
- First, go to the region of the resources you want to track, and search for Config
- Click Get started. I selected Include global resources, since there's a need to track roles and policies, and chose to create a new bucket
- Rules are meant for auditing purposes, to evaluate whether your resources follow best practices. That isn't useful for my situation, so I didn't select anything
- Finally review and confirm
AWS Config – comparing resources
Go to AWS Config -> Dashboard; it nicely lists all discovered resources by category. Since we need to compare the S3 configuration, I clicked on S3 Bucket.
Find the two S3 buckets to compare. Notice this is actually under Resources, filtered by Resource type = AWS S3 Bucket.
In the middle section, expand View Configuration Item (JSON), then copy to your favorite comparison tool (VS Code / WinMerge).
Comparison screenshot:
It's easy to see that the following section is missing:
"PublicAccessBlockConfiguration": {
  "blockPublicAcls": true,
  "ignorePublicAcls": true,
  "blockPublicPolicy": true,
  "restrictPublicBuckets": true
},
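If you manage the bucket with Terraform, that missing delta maps to the aws_s3_bucket_public_access_block resource. A sketch, assuming the bucket is defined elsewhere as aws_s3_bucket.bootstrap:

```hcl
# Recreates the PublicAccessBlockConfiguration found missing in the comparison.
resource "aws_s3_bucket_public_access_block" "bootstrap" {
  bucket                  = aws_s3_bucket.bootstrap.id
  block_public_acls       = true
  ignore_public_acls      = true
  block_public_policy     = true
  restrict_public_buckets = true
}
```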
Cleanup
Keep in mind that there is a cost for using AWS Config. If you only need it to compare resource configurations, you should disable it after you are done:
Go to Settings -> note that Recording is on -> Edit
Uncheck Enable recording
Confirm
Recording is now off.
Terraform – difference between data.aws_iam_policy_document and in-line JSON policy
So I've got this block of Terraform code, which simply allows EC2 to assume the role:
data "aws_iam_policy_document" "bootstrap_role" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "bootstrap" {
  name               = "bootstrap-${random_string.bucket.result}"
  assume_role_policy = data.aws_iam_policy_document.bootstrap_role.json
}
When checking in the AWS Console, I can see the following in the trust relationship created:
"Sid": ""
When I create the role manually in the AWS Console, I would not have this section.
I tried updating the Terraform code to the following, and it made no difference:
data "aws_iam_policy_document" "bootstrap_role" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
    sid = null
  }
}

resource "aws_iam_role" "bootstrap" {
  name               = "bootstrap-${random_string.bucket.result}"
  assume_role_policy = data.aws_iam_policy_document.bootstrap_role.json
}
After some research, I settled on this code with an inline JSON policy instead:
resource "aws_iam_role" "bootstrap" {
  name               = "bootstrap-${random_string.bucket.result}"
  assume_role_policy = jsonencode(
    {
      "Version" : "2012-10-17",
      "Statement" : [
        {
          "Effect" : "Allow",
          "Principal" : {
            "Service" : "ec2.amazonaws.com"
          },
          "Action" : "sts:AssumeRole"
        }
      ]
    }
  )
}
Now it's nice and clean.