
Aviata Cloud Solo Flight Challenge (Detect with Telemetry)

Introduction

Estimated Time: 30 minutes

You have performed the attack; now it's time to see what was detected and how we will conduct the investigation. You will be in the AWS Web Console for the rest of the lab.

  • Evaluate the Flow Logs
  • Investigate the cluster logs
  • Try out CloudTrail

Investigating the Flow Logs

We will investigate logs using CloudWatch Logs Insights, a feature of CloudWatch Logs that allows you to search and analyze log data. We have already set up VPC Flow Logs to flow into CloudWatch, so we can jump straight there.
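For reference, the flow log feeding that log group could have been created with a command along the lines of the sketch below. You do not need to run this in the lab; the VPC ID and IAM role ARN are placeholders, not values from this environment.

    # Sketch: send VPC Flow Logs for a VPC to the /aviata/vpcflow log group (placeholder IDs)
    aws ec2 create-flow-logs \
      --resource-type VPC \
      --resource-ids vpc-0123456789abcdef0 \
      --traffic-type ALL \
      --log-destination-type cloud-watch-logs \
      --log-group-name /aviata/vpcflow \
      --deliver-logs-permission-arn arn:aws:iam::111111111111:role/vpc-flow-logs-role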

  1. Head to the CloudWatch Logs console by typing CloudWatch in the search bar and clicking on the CloudWatch service.

    CloudWatch

  2. Click on the Log groups link in the left-hand menu, and then select the VPC Flow Logs log group. You should see a log group named /aviata/vpcflow.

    VPC Flow Logs

  3. Once you have found the log group, click the View in Logs Insights button to start looking at the data.

    View in Logs Insights

    Easy Button

    We could have saved you some time by sending you straight to Logs Insights, but we want to make sure you can get around the management console.

    Link to CloudWatch Logs Insights

  4. You should see a query editor with a query already in place. This query will show you all the VPC Flow Logs from the last hour. Click the Run query button to see the results. Let's start with a query to list unique destination ports.

    Query

    fields @timestamp, @message
    | filter ispresent(dstAddr)
    | filter dstAddr not like /^10.*/
    | stats count(*) by dstPort
    

    Sample Results

    Query Editor

  5. You should see a list of destination ports and the number of times they were accessed. We do not know what is happening on these ports, only that there is some communication. We can rule out some ports as they are common in the environment (see the sketch at the end of this step for one way to exclude them). But port 9999 is one we want to investigate, as it was part of the attack.

    Query

    fields @timestamp, @message
    | filter ispresent(dstAddr)
    | filter dstAddr not like /^10.*/
    | filter dstPort == 9999
    | stats count(*) by srcAddr
    

    Sample Results

    10.60.2.88          50
    10.60.134.240       55
    

    Why are we seeing two different source addresses communicating to port 9999? 10.60.134.* is the subnet of the EKS cluster, and 10.60.2.* is the subnet of the simulator VM. We can see that the simulator VM is communicating with the EKS cluster; this is the reverse shell that was set up in the attack. Our VPC Flow Logs are being collected from multiple VPCs in our environment, so we will see the communications between all resources.
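
    As an aside on ruling out the common ports mentioned earlier in this step, a variant of the first query can exclude them explicitly. This is only a sketch; the excluded ports (443, 53, and 123) are assumptions about what is common and should be adjusted for your environment.

    fields @timestamp, @message
    | filter ispresent(dstAddr)
    | filter dstAddr not like /^10.*/
    | filter dstPort != 443 and dstPort != 53 and dstPort != 123
    | stats count(*) by dstPort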

  6. We will narrow in on traffic where the destination address is not in our private subnet and the destination port is 9999. What is the destination address?

    Query

    fields @timestamp, @message
    | filter ispresent(dstAddr)
    | filter dstAddr not like /^10.*/
    | filter dstPort == 9999
    | stats count(*) by dstAddr
    

    Sample Results

    3.148.107.52       97
    
  7. This should be the IP address of the attacker; in our case it's the ACE135-Simulator VM that we are operating on. VPC Flow Logs are interesting, but they do not provide much context. The next step is to investigate the cluster logs.

Investigating the Cluster Logs

We have a number of logs coming from the cluster that we want to investigate. The /aws/eks/aviata-eks-cluster log group is where we will start.
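The queries below depend on the EKS control plane audit logs being delivered to CloudWatch. That is already configured in this lab; for reference only, enabling it on a cluster looks roughly like the sketch below (treat the cluster name and log types as assumptions to adjust for your own environment).

    # Sketch: enable control plane audit logging on an EKS cluster (already done in this lab)
    aws eks update-cluster-config \
      --name aviata-eks-cluster \
      --logging '{"clusterLogging":[{"types":["api","audit","authenticator"],"enabled":true}]}'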

  1. Select the Browse Log Groups button, deselect the /aviata/vpcflow log group, and select the /aws/eks/aviata-eks-cluster log group.

    Log Groups

  2. We are looking to see when a pod was created.

    Query

    fields @timestamp, @message
    | filter verb == "create"
    

    Sample Results


  3. We can see that create commands are captured in the logs, but this is too much data. We can filter down to look for a pod created by the attacker. We know that port 9999 was used, so it might be part of the create command. We can filter on that.

    Query

    fields @timestamp, @message
    | filter verb == "create"
    | filter @message like /9999/
    

    Sample Results


  4. That was a lot of detail. If we look through the logs, we can see that some of them provide image information. We can filter down to look for the image that was used in the attack.

    Query

    fields requestObject.spec.containers.0.image, @message
    | filter verb == "create"
      and @message like /9999/
      and ispresent(requestObject.spec.containers.0.image)
    

    Sample Results


  5. Oh, that is not good. We can refine the query to pull out just the information we may want to include in the report.

    Query

    fields requestObject.spec.containers.0.image as image,
      requestObject.spec.containers.0.args as arguments,
      requestObject.spec.containers.0.volumeMounts as mounts
    | filter verb == "create"
      and @message like /9999/
      and ispresent(requestObject.spec.containers.0.image)
    

    Sample Results


  6. We have the information about what was deployed, but our big question is HOW it was deployed. Luckily, the AWS credentials recorded in the request give it away. We can pull that information out with the query.

    Query

    fields requestObject.spec.containers.0.image as image,
      user.extra.accessKeyId.0 as accessKeyId,
      userAgent,
      user.extra.canonicalArn.0 as arn
    | filter verb == "create"
      and @message like /9999/
      and ispresent(requestObject.spec.containers.0.image)
    

    Sample Results


We can see that the installation of the raesene/ncat image was done by the ace135-related-teal role, which belongs to the simulator virtual machine. We also have access key IDs, and we have the user agent string, which points to kubectl. We knew this, since we perpetrated the attack, but you can easily see how this information can be used to track down the attacker.
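As a side note, if all you had was an access key ID from a log entry, you could ask STS which account it belongs to. This is just a sketch, and the key shown is a placeholder rather than one from the lab.

    # Sketch: identify the AWS account that owns an access key (placeholder key ID)
    aws sts get-access-key-info --access-key-id ASIAIOSFODNN7EXAMPLE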

CloudTrail Shows All

We can now pivot to the CloudTrail logs to see the full picture of the attack. We know that the attacker stole credentials from the EKS cluster, so any activity against the AWS API will be logged in CloudTrail as if the commands came from that role.

  1. Drop back into your terminal, the one with the prompt student@ace135.sans.labs ~/code/ace135/docs/chapter4 (main)$, and run this command to get the instance ID of the EKS node.

    Get the instance ID of aviata-eks-node

    INSTANCE_ID=$(aws ec2 describe-instances \
      --filters "Name=tag:Name,Values=aviata-eks-node" \
      --query "Reservations[].Instances[].InstanceId" \
      --output text)
    echo "Instance ID: $INSTANCE_ID"
    

    Sample Output

    Instance ID: i-02c708c19f29d6d9e
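
    Why will we filter CloudTrail by this instance ID in step 3? When an EC2 instance uses credentials from its instance profile, the assumed-role session name is the instance ID, so the ID appears in userIdentity.arn. If you want to confirm which role the node carries, an optional lookup is sketched below, reusing the variable set above.

    # Optional sketch: show the instance profile ARN attached to the EKS node
    aws ec2 describe-instances \
      --instance-ids "$INSTANCE_ID" \
      --query "Reservations[].Instances[].IamInstanceProfile.Arn" \
      --output text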
    
  2. Now, head back to CloudWatch and select the CloudTrail log group. Shortcut to CloudWatch Logs Insights

  3. We now want to look for any activity that happened from this instance.

    Query

    Remember to replace this instance ID i-02c708c19f29d6d9e with the one for your EKS node.

    fields eventName, @message
    | filter userIdentity.arn like /i-02c708c19f29d6d9e/
    

    Sample Results


We can be more specific and really pull out the data by altering the query. Can you figure out what objects were stolen?
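If you want a starting point for that question, one possible refinement is sketched below. It assumes the objects were pulled from S3 and that S3 data events are being delivered to this trail; both are assumptions to verify in your environment, and remember to swap in your own instance ID.

    fields eventTime, eventName, requestParameters.bucketName, requestParameters.key
    | filter userIdentity.arn like /i-02c708c19f29d6d9e/
      and eventSource == "s3.amazonaws.com"
      and eventName == "GetObject"
    | sort eventTime desc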

Conclusion

We have looked at the logs from VPC Flow Logs, the EKS cluster, and CloudTrail. We have been able to track down the attacker and see what they did. This is a very high-level overview of what can be done. There are many more logs that can be investigated, and many more tools that can be used to track down the attacker. We have only scratched the surface.

We will wrap up the workshop in just a moment.