Compare AWS resource configurations

So you have created your resources manually in AWS and everything works fine, but when you try to create the same resources with Terraform, they just won't work?

I ran into this issue while trying to create the S3 bucket, policy, and roles for a Palo Alto bootstrap. Below is how I resolved it; please feel free to comment if you have better methods.

Background:

I followed this article and created the S3 bucket and folder structure, uploaded bootstrap.xml and init-cfg.txt under the config folder, and everything worked fine. But when I tried the Terraform scripts from my buddy, they just didn't work. There must be some delta causing the issue.

This is a very easy problem to tackle in Azure: for most resources, you can export an ARM or Bicep template, which reveals the full configuration.

It isn't as straightforward in AWS. Looking at the AWS CLI, the aws s3 command has the following subcommands:

$ aws s3 ?

usage: aws [options] <command> <subcommand> [<subcommand> ..] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help

aws: error: argument subcommand: Invalid choice, valid choices are:

ls                                       | website
cp                                       | mv
rm                                       | sync
mb                                       | rb
presign

None of them describes the current configuration.

There is an s3api command, but you have to query each aspect of the configuration with its own subcommand, as the following huge list of get-* calls shows. If my solution were more complicated than just S3, this would snowball quickly:

get-bucket-accelerate-configuration
get-bucket-acl
get-bucket-analytics-configuration
get-bucket-cors
get-bucket-encryption
get-bucket-intelligent-tiering-configuration
get-bucket-inventory-configuration
get-bucket-lifecycle-configuration
get-bucket-location
get-bucket-logging
get-bucket-metrics-configuration
get-bucket-notification-configuration
get-bucket-ownership-controls
get-bucket-policy
get-bucket-policy-status
get-bucket-replication
get-bucket-request-payment
get-bucket-tagging
get-bucket-versioning
get-bucket-website
get-object
get-object-acl
get-object-attributes
get-object-legal-hold
get-object-lock-configuration
get-object-retention
get-object-tagging
get-object-torrent
get-public-access-block
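If only a handful of those settings matter, a small loop can at least dump them for one bucket. A sketch (the bucket name is a placeholder, and the commands are echoed as a dry run rather than executed; running them for real requires a configured AWS CLI):

```shell
# Dump a few key settings for one bucket via s3api "get-*" subcommands.
# BUCKET is a placeholder; the echo makes this a dry run, remove it to execute.
BUCKET="my-bucket"
for sub in get-bucket-policy get-bucket-acl get-bucket-versioning get-public-access-block; do
  echo "aws s3api $sub --bucket $BUCKET"
done
```

This still doesn't scale past a single service, which is what led me to AWS Config below.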

Then I came across AWS Config, which tracks the configuration of each resource.

AWS Config – Getting started

  • First, go to the region of the resources you want to track and search for Config
  • Click Get started. I selected Include global resources, since there's a need to track roles and policies, and chose to create a new bucket
  • Rules are for auditing purposes, to evaluate whether your resources follow best practices. That isn't useful in this situation, so I didn't select any
  • Finally, review and confirm

AWS Config – comparing resources

Going to AWS Config -> Dashboard, all discovered resources are nicely listed by category. Since we need to compare S3 configurations, I clicked on S3 Bucket.

Find the two S3 buckets to compare. Note that this view is actually under Resources, filtered by Resource Type = AWS S3 Bucket.

In the middle section, expand View Configuration Item (JSON), then copy the JSON into your favorite comparison tool (VS Code / WinMerge).

Comparison screenshot:

It's easy to see that the following section is missing:

"PublicAccessBlockConfiguration": {
  "blockPublicAcls": true,
  "ignorePublicAcls": true,
  "blockPublicPolicy": true,
  "restrictPublicBuckets": true
},
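The same delta can also be spotted from the command line: save the two configuration items locally and diff them. A minimal sketch with two illustrative files (the file names and contents are made up, pared down to the relevant key):

```shell
# Pared-down configuration items for the working bucket and the Terraform-built
# bucket (illustrative contents, not real AWS Config output).
cat > manual-bucket.json <<'EOF'
{
  "PublicAccessBlockConfiguration": {
    "blockPublicAcls": true,
    "ignorePublicAcls": true,
    "blockPublicPolicy": true,
    "restrictPublicBuckets": true
  }
}
EOF
cat > terraform-bucket.json <<'EOF'
{
}
EOF
# Lines prefixed with "<" exist only in the manually built bucket.
diff manual-bucket.json terraform-bucket.json || true
```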

Cleanup

Keep in mind that there is a cost for using AWS Config. If you only need it to compare resource configurations, you should disable it once you are done:

Settings -> note that Recording is on -> Edit

Uncheck Enable recording

Confirm

Recording is now off

Terraform – difference between data.aws_iam_policy_document and in-line JSON policy

So I've got this block of Terraform code, which simply allows EC2 to assume the role:

data "aws_iam_policy_document" "bootstrap_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "bootstrap" {
  name               = "bootstrap-${random_string.bucket.result}"
  assume_role_policy = data.aws_iam_policy_document.bootstrap_role.json
}

When checking the role in the AWS Console, I can see the trust relationship was created with:

"Sid": ""

When I create the role manually in the AWS Console, this section is not present:

"Sid": ""

I tried updating the Terraform code to the following, and it made no difference:

data "aws_iam_policy_document" "bootstrap_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }

    sid = null
  }
}

resource "aws_iam_role" "bootstrap" {
  name               = "bootstrap-${random_string.bucket.result}"
  assume_role_policy = data.aws_iam_policy_document.bootstrap_role.json
}

After some research, I settled on this code with an inline JSON policy instead:

resource "aws_iam_role" "bootstrap" {
  name = "bootstrap-${random_string.bucket.result}"
  assume_role_policy = jsonencode(
    {
      "Version" : "2012-10-17",
      "Statement" : [
        {
          "Effect" : "Allow",
          "Principal" : {
            "Service" : "ec2.amazonaws.com"
          },
          "Action" : "sts:AssumeRole"
        }
      ]
    }
  )
}

Now it’s nice and clean
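To double-check the deployed role, you can pull the trust policy (for example with aws iam get-role --role-name <role-name> --query Role.AssumeRolePolicyDocument) and confirm the empty Sid is gone. A local sketch against a sample document (the file name is made up):

```shell
# Sample trust policy as rendered by the jsonencode() version above.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
# The empty Sid should no longer appear anywhere in the document.
if grep -q '"Sid"' trust-policy.json; then
  echo "Sid key present"
else
  echo "no Sid key"
fi
```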

Use terraformer to import AWS resources in Linux

I've come to a situation where I need to import existing resources into Terraform, and this tool looks interesting:
https://github.com/GoogleCloudPlatform/terraformer

The following example is on Linux, and my terraform installation is at /usr/bin/terraform

$ which terraform
/usr/bin/terraform

Look up the right executable on the releases page:
https://github.com/GoogleCloudPlatform/terraformer/releases

Since I'm running Linux and need to import resources from AWS, I downloaded
terraformer-aws-linux-amd64 to /usr/bin

cd /usr/bin
sudo wget https://github.com/GoogleCloudPlatform/terraformer/releases/download/0.8.19/terraformer-aws-linux-amd64
# downloaded binaries are not executable by default
sudo chmod +x terraformer-aws-linux-amd64

Verify that AWS credentials are set in environment variables:

$ cat ~/.profile

# environment variables
export AWS_ACCESS_KEY_ID='****'
export AWS_SECRET_ACCESS_KEY='****'

Create a new folder for the Terraform files:

mkdir ~/terraformer

Create providers.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
}

Run terraform init

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 4.0"...
- Installing hashicorp/aws v4.5.0...
- Installed hashicorp/aws v4.5.0 (signed by HashiCorp)

Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.

Terraform has been successfully initialized!

While trying to use Terraformer, I got an error:

$ terraformer-aws-windows-amd64.exe import aws --resources=s3
2022/03/18 11:57:29 aws importing default region
2022/03/18 11:57:30 aws importing... s3
2022/03/18 11:57:34 aws error initializing resources in service s3, err: no EC2 IMDS role found, operation error ec2imds: GetMetadata, exceeded maximum number of attempts, 3, request send failed, Get "http://169.254.169.254/latest/meta-data/iam/security-credentials/": dial tcp 169.254.169.254:80: connectex: A socket operation was attempted to an unreachable network.
2022/03/18 11:57:34 aws Connecting....

After some research, the workaround is to specify the region and an empty profile:

$ terraformer-aws-windows-amd64.exe import aws --resources=s3 --regions=us-west-1 --profile=""

2022/03/18 11:59:11 aws importing region us-west-1
2022/03/18 11:59:13 aws importing... s3
2022/03/18 11:59:16 aws done importing s3
2022/03/18 11:59:16 Number of resources for service s3: 3
2022/03/18 11:59:16 Refreshing state... aws_s3_bucket.tfer--pan-bootstrap-jye
2022/03/18 11:59:16 Refreshing state... aws_s3_bucket.tfer--cf-templates-6vv0zllrvrqt-us-west-1
2022/03/18 11:59:16 Refreshing state... aws_s3_bucket.tfer--bootstrap-dfa5077a6c367b5a
2022/03/18 11:59:22 Filtered number of resources for service s3: 3
2022/03/18 11:59:22 aws Connecting....
2022/03/18 11:59:22 aws save s3
2022/03/18 11:59:22 aws save tfstate for s3

New folders get created with the generated Terraform code and state file:
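To my understanding, terraformer writes its output under a generated/{provider}/{service} path by default, so the S3 import should land in a layout roughly like this (file names are illustrative, shown as a sketch rather than produced by a live run):

```shell
# Sketch of the expected terraformer output tree for the s3 service.
for f in provider.tf s3_bucket.tf outputs.tf terraform.tfstate; do
  echo "generated/aws/s3/$f"
done
```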

Bootstrap Palo Alto with Aviatrix FireNet with AWS GWLB enabled

Recently I had to figure out how to bootstrap a Palo Alto firewall integrated with AWS GWLB and Aviatrix FireNet. Here is my learning journal for future reference:

Validated environment:

  • Aviatrix Controller version: UserConnect-6.6.5404
  • Palo Alto Networks VM-Series Next-Generation Firewall (BYOL) 10.1.3

I used the following Terraform code to create an Aviatrix FireNet egress-only transit:

module "transit_firenet_egress" {
  source                        = "terraform-aviatrix-modules/aws-transit-firenet/aviatrix"
  version                       = "5.0.0"
  name                          = "egress"
  cidr                          = "10.1.0.0/20"
  region                        = var.region
  account                       = var.account
  firewall_image                = "Palo Alto Networks VM-Series Next-Generation Firewall (BYOL)"
  inspection_enabled            = false
  egress_enabled                = true
  enable_egress_transit_firenet = true
  single_az_ha                  = false
  use_gwlb                      = true
  firewall_image_version        = "10.1.3"
}

Then I followed the steps in this article:

https://docs.aviatrix.com/HowTos/transit_firenet_workflow_aws_gwlb.html?highlight=gwlb#palo-alto-network-pan
  • Step 3 can be skipped, as there's no need to activate a license
  • Step 4 can be skipped: the firewall is configured in one-armed mode, so there is no WAN port
  • Step 6 can be skipped: again, since the firewall is in one-armed mode, no route table changes are needed

After the configuration, I confirmed the firewall worked as expected and saved the configuration as bootstrap.xml.

Then I followed this article:

https://docs.aviatrix.com/HowTos/bootstrap_example.html
  • Created S3 bucket
  • Created IAM Role bootstrap-VM-S3-role and Policy bootstrap-VM-S3-policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        }
    ]
}
  • Created the following folder structure in the S3 bucket:
bootstrap-bucket/
  config/
    init-cfg.txt
    bootstrap.xml
  content/
  license/
  software/
  • Uploaded bootstrap.xml and init-cfg.txt
  • Modified the Terraform code so it looks like:
module "transit_firenet_egress" {
  source                        = "terraform-aviatrix-modules/aws-transit-firenet/aviatrix"
  version                       = "5.0.0"
  name                          = "egress"
  cidr                          = "10.1.0.0/20"
  region                        = var.region
  account                       = var.account
  firewall_image                = "Palo Alto Networks VM-Series Next-Generation Firewall (BYOL)"
  inspection_enabled            = false
  egress_enabled                = true
  enable_egress_transit_firenet = true
  single_az_ha                  = false
  use_gwlb                      = true
  firewall_image_version        = "10.1.3"
  bootstrap_bucket_name_1       = "<s3-bucket-name>"
  iam_role_1                    = "bootstrap-VM-S3-role"
}
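The bucket layout from the earlier steps can also be created and populated with the AWS CLI. A sketch (the bucket name is a placeholder; empty folders are just zero-byte objects whose keys end in a slash, and the commands are echoed as a dry run):

```shell
# Dry run of the bootstrap bucket setup; remove the echo to execute for real.
BUCKET="<s3-bucket-name>"  # placeholder
for d in config content license software; do
  echo "aws s3api put-object --bucket $BUCKET --key $d/"
done
echo "aws s3 cp bootstrap.xml s3://$BUCKET/config/"
echo "aws s3 cp init-cfg.txt s3://$BUCKET/config/"
```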

Palo Alto CLI command to check whether the bootstrap worked:

show system bootstrap status

In my case, the bootstrap appeared to be working

An additional command to troubleshoot the bootstrap (or you can watch the console session messages):

debug logview component bts_details

However, when I tried to pass traffic through the firewall, even with a wide-open policy, packet capture still showed traffic getting dropped when sent from the GWLB endpoints.

After comparing a working, manually configured firewall with a bootstrapped one, here are the observations:

  • When bootstrap.xml is loaded via bootstrap and the configuration is exported right away, only the public-key differs, which makes sense as a new firewall gets new SSH keys
  • I also learned that when a setting conflicts between init-cfg.txt and bootstrap.xml, the setting in init-cfg.txt wins. Since we are not using Panorama at this point, all values in init-cfg.txt should just be empty, like this:
type=
ip-address=
default-gateway=
netmask=
ipv6-address=
ipv6-default-gateway=
hostname=
vm-auth-key=
panorama-server=
panorama-server-2=
tplname=
dgname=
dns-primary=
dns-secondary=
op-command-modes=
dhcp-send-hostname=
dhcp-send-client-id=
dhcp-accept-server-hostname=
dhcp-accept-server-domain=
  • We also found that when bootstrapping via Terraform, GWLB isn't enabled. CLI command to check:
show plugins vm_series aws gwlb
  • The management interface, however, was swapped as expected
  • Since we do need GWLB to pass traffic to the firewall, I tried the following command:
request plugins vm_series aws gwlb inspect enable yes

Now the GWLB is enabled, and traffic is passing!

We then modified init-cfg.txt to also enable GWLB during bootstrapping:

type=
ip-address=
default-gateway=
netmask=
ipv6-address=
ipv6-default-gateway=
hostname=
vm-auth-key=
panorama-server=
panorama-server-2=
tplname=
dgname=
dns-primary=
dns-secondary=
op-command-modes=
dhcp-send-hostname=
dhcp-send-client-id=
dhcp-accept-server-hostname=
dhcp-accept-server-domain=
plugin-op-commands=aws-gwlb-inspect:enable

Now everything is working as expected. Reference:

https://docs.paloaltonetworks.com/vm-series/10-1/vm-series-deployment/set-up-the-vm-series-firewall-on-aws/vm-series-integration-with-gateway-load-balancer/integrate-the-vm-series-with-an-aws-gateway-load-balancer/enabling-vm-series-integration-with-a-gwlb.html

Terraform code to create the S3 bucket, role/policy, and sample bootstrap.xml and init-cfg.txt:
https://github.com/jye-aviatrix/terraform-aviatrix-aws-gwlb-palo-alto-10-bootstrap

TechTalk | Turn Static Public Cloud Routing Dynamic with Centralized Management


Engineers responsible for building and managing public cloud network entities like VNets, VPCs, and VCNs are constantly challenged by 90s-style static routing in public clouds. CSPs provide discrete constructs that are powerful on their own yet place enormous responsibility on enterprises to build, manage, and operationalize them.

Join us for a session to see how to automate cloud network builds without the overhead of route table management. You will see a demo of how Aviatrix simplifies this huge challenge and makes network management in the cloud a breeze with deep Day-2 operational capabilities.