When you use Terraform or other tools that rely on environment variables, you may find that the variables don't persist between sessions.
Here’s how I took care of it:
Windows
This is rather easy: go to System Properties by running sysdm.cpl from the command line, then click on Environment Variables.
Add new or edit existing environment variables, such as AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY. The settings take effect the next time you launch a command prompt or PowerShell session.
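If you prefer to stay in the command line, the same persistent variables can be set with setx (a minimal sketch; the values below are placeholders, and they only become visible in newly opened sessions):

setx AWS_ACCESS_KEY_ID "AKIAXXXXXXXXXXXXXXXX"
setx AWS_SECRET_ACCESS_KEY "your-secret-access-key"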
Linux / Mac
First you have to find out what shell you are using:
$ echo $SHELL
If it returns: /bin/bash, then you are using bash, and need to edit ~/.bashrc
If it returns: /bin/zsh, then you are using zsh, and need to edit ~/.zshrc
Example of a ~/.bashrc file. Make sure to place export in front of each line, and there should be no whitespace around the equals sign =.
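A minimal sketch of what those lines could look like (the key values are placeholders):

export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"

Run source ~/.bashrc (or open a new terminal) for the change to take effect.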
So you have created your resources manually in AWS and everything works fine, but when you try to create the same resources using Terraform, they just won't work?
I ran into this issue when trying to create S3 + Policy + Roles for the Palo Alto bootstrap. Below is how I resolved it; please feel free to comment if you have better methods.
Background:
I followed this article, created the S3 bucket and folder structure, uploaded bootstrap.xml and init-cfg.txt under the config folder, and it worked fine. But when I tried the Terraform scripts from my buddy, they just didn't work. There must be some delta causing the issue.
This is a very easy problem to tackle in Azure: for most resources, you can choose to export an ARM or Bicep template, which reveals the full configuration.
It isn't as straightforward in AWS. Looking at the AWS CLI, the aws s3 command has the following subcommands:
$ aws s3 ?
usage: aws [options] <command> <subcommand> [<subcommand> ..] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument subcommand: Invalid choice, valid choices are:
ls | website
cp | mv
rm | sync
mb | rb
presign
None of them is related to describing the current configuration.
There is an s3api command, but it appears you must query each configuration aspect with its own subcommand from a huge list. If my solution were much more complicated than just S3, this would snowball quickly and become hard to manage.
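For illustration, checking just a few aspects of a single bucket already takes several calls (a sketch; the bucket name is a placeholder):

$ aws s3api get-bucket-policy --bucket my-bootstrap-bucket
$ aws s3api get-bucket-acl --bucket my-bootstrap-bucket
$ aws s3api get-bucket-versioning --bucket my-bootstrap-bucket
$ aws s3api get-public-access-block --bucket my-bootstrap-bucket

And that still doesn't cover the IAM roles and policies.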
Then I came across AWS Config, which tracks the configuration of each resource.
AWS Config – Getting started
First, go to the region of the resources you want to track and search for Config.
Click on Get started. I selected Include global resources since I need to track roles and policies, and chose to create a new bucket.
Rules are meant for auditing purposes, to evaluate whether your resources follow best practices. That isn't useful for my situation, so I didn't select any.
Finally review and confirm
AWS Config – comparing resources
Going to AWS Config -> Dashboard, all discovered resources are nicely listed by category. Since we need to compare the S3 configuration, I clicked on S3 Bucket.
Find the two S3 buckets to compare; notice this is actually under Resources, filtered by Resource Type = AWS S3 Bucket.
In the middle section, expand View Configuration Item (JSON), then copy it into your favorite comparison tool (VS Code / WinMerge).
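If you prefer the CLI, the same configuration item JSON can also be pulled with the configservice commands (a sketch assuming AWS Config is already recording; the bucket name is a placeholder):

$ aws configservice get-resource-config-history --resource-type AWS::S3::Bucket --resource-id my-bootstrap-bucket --limit 1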
Keep in mind that there is a cost for using AWS Config. If you only need it for comparing resource configurations, you should disable it after you are done.
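One way to turn it off from the CLI is to stop and delete the configuration recorder and delivery channel (a sketch, assuming the default names the console creates):

$ aws configservice stop-configuration-recorder --configuration-recorder-name default
$ aws configservice delete-delivery-channel --delivery-channel-name default
$ aws configservice delete-configuration-recorder --configuration-recorder-name default

If you no longer need the history, also clean up the S3 bucket that AWS Config was delivering to.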
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 4.0"...
- Installing hashicorp/aws v4.5.0...
- Installed hashicorp/aws v4.5.0 (signed by HashiCorp)
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
Terraform has been successfully initialized!
While trying to use Terraformer, I got an error:
$ terraformer-aws-windows-amd64.exe import aws --resources=s3
2022/03/18 11:57:29 aws importing default region
2022/03/18 11:57:30 aws importing... s3
2022/03/18 11:57:34 aws error initializing resources in service s3, err: no EC2 IMDS role found, operation error ec2imds: GetMetadata, exceeded maximum number of attempts, 3, request send failed, Get "http://169.254.169.254/latest/meta-data/iam/security-credentials/": dial tcp 169.254.169.254:80: connectex: A socket operation was attempted to an unreachable network.
2022/03/18 11:57:34 aws Connecting....
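This error generally means the AWS SDK couldn't find credentials locally and fell back to the EC2 instance metadata endpoint, which isn't reachable from a workstation. One way past it (a sketch that ties back to the environment variables discussed earlier; the values are placeholders) is to export credentials in the shell before running the import:

$ export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
$ export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
$ export AWS_DEFAULT_REGION="us-east-1"
$ terraformer-aws-windows-amd64.exe import aws --resources=s3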
Recently I figured out how to bootstrap a Palo Alto firewall integrated with AWS GWLB and Aviatrix FireNet. Here is my learning journal for future reference:
An additional command to troubleshoot the bootstrap (or you can watch the console session messages):
debug logview component bts_details
However, when trying to pass traffic through the firewall, even with a wide-open policy, packet capture still showed traffic getting dropped when sent from the GWLB endpoints.
After comparing a working, manually configured firewall with a bootstrapped firewall, here are the observations:
When bootstrap.xml is loaded via bootstrap and the configuration is exported right away, only public-key is modified, which makes sense as a new firewall gets new SSH keys.
I've also learned that when a setting conflicts between init-cfg.txt and bootstrap.xml, the setting in init-cfg.txt wins. Since we are not using Panorama at this point, all values in init-cfg.txt should just be empty, like this:
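For reference, a sketch of what that init-cfg.txt could look like, using the standard key names from the Palo Alto bootstrap documentation with the values left blank (type=dhcp-client is an assumption here, since AWS deployments typically take a DHCP address on the management interface):

type=dhcp-client
ip-address=
default-gateway=
netmask=
ipv6-address=
ipv6-default-gateway=
hostname=
vm-auth-key=
panorama-server=
panorama-server-2=
tplname=
dgname=
dns-primary=
dns-secondary=
op-command-modes=
dhcp-send-hostname=
dhcp-send-client-id=
dhcp-accept-server-hostname=
dhcp-accept-server-domain=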
Enterprises enjoy more development freedom and agility in the cloud. However, this increased freedom sometimes leads to the hasty replication of networks to help speed up productivity. As a result, IP address overlap can stunt your cloud agility. Find out how Aviatrix Transit and NAT can help you remove these roadblocks from your cloud network.
Engineers responsible for building and managing public cloud network entities like VNets, VPCs, and VCNs are constantly challenged with 90s-style static routing in public clouds. CSPs provide discrete constructs that are powerful on their own yet place enormous responsibility on enterprises to build, manage, and operationalize them.
Join us for a session to see how to automate cloud network builds without the overhead of route table management. You will see a demo of how Aviatrix simplifies this huge challenge and makes network management in the cloud a breeze with deep Day-2 operational capabilities.