Essential Guide to AWS Governance – Part 4: Send CloudTrail logs from AWS Accounts to a central Elasticsearch Instance and visualize them using Kibana

In the previous blog post I showed you how to enable CloudTrail on the Project1 AWS Account (Account ID: 222222222222) and send its logs to a central S3 Bucket in another AWS Account (Account ID: 111111111111).

As always, I recommend reading the previous blog posts in this series first:

In this blog post we are going to achieve the following goals:

  • Create an Elasticsearch Instance on the Security and Auditing AWS Account (Account ID: 111111111111) and configure Access Policies
  • Enable CloudTrail on Project1 AWS Account (Account ID: 222222222222) and deliver logs via CloudWatch and a Lambda Function to the Elasticsearch Instance above

So here are the steps we will take:

  1. Create an Elasticsearch domain on the Security and Auditing AWS Account (Account ID: 111111111111) and configure Access Policies to accept logs coming from the Project1 AWS Account (Account ID: 222222222222)
  2. On the Project1 AWS Account (Account ID: 222222222222) create and configure CloudTrail, CloudWatch, and a Lambda function to deliver logs to Elasticsearch

Create an Elasticsearch domain on the Security and Auditing AWS Account and configure Access Policies to accept logs coming from the Project1 AWS Account

Log into your Security and Auditing AWS Account (Account ID: 111111111111), open the Elasticsearch service from the menu, and click “Create a new domain”. For this example, I give the new domain the name esmaeiles1:

Make sure you select the right size for your Elasticsearch Instance. Remember, Elasticsearch runs on an instance behind the scenes that you have no direct access to; however, you can always scale up the instance if your load increases.

When it comes to Network Configuration, give it VPC access only. This means the domain is only reachable from within the VPC; you then need to select the VPC, subnet, and security group:

Then you need to configure the Access Policy. Paste the following JSON code into the template editor:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::222222222222:root"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:eu-central-1:111111111111:domain/esmaeiles1/*"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:eu-central-1:111111111111:domain/esmaeiles1/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "x.y.z.t"
        }
      }
    }
  ]
}
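If you prefer to build this policy programmatically instead of hand-editing the template, here is a minimal sketch. The account IDs, region, domain name, and IP placeholder are the example values from this post; replace them with your own.

```python
import json

# Example values from this post -- replace with your own.
TRUSTED_ACCOUNT = "222222222222"   # Project1 AWS Account
DOMAIN_ARN = "arn:aws:es:eu-central-1:111111111111:domain/esmaeiles1"
CLIENT_IP = "x.y.z.t"              # IP allowed to reach Kibana

def build_access_policy(trusted_account, domain_arn, client_ip):
    """Build the Elasticsearch domain access policy shown above."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # Allow the Project1 account to deliver logs to the domain
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{trusted_account}:root"},
                "Action": "es:*",
                "Resource": f"{domain_arn}/*",
            },
            {   # Allow a single client IP to reach Kibana
                "Sid": "",
                "Effect": "Allow",
                "Principal": {"AWS": "*"},
                "Action": "es:*",
                "Resource": f"{domain_arn}/*",
                "Condition": {"IpAddress": {"aws:SourceIp": client_ip}},
            },
        ],
    }

print(json.dumps(build_access_policy(TRUSTED_ACCOUNT, DOMAIN_ARN, CLIENT_IP), indent=2))
```

Printing the policy with `indent=2` gives you exactly the JSON you can paste into the template editor.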

The JSON code above gives the Project1 AWS Account (Account ID: 222222222222) permission to send data (logs) to the Elasticsearch Instance. It also allows access to the Elasticsearch Instance from a client with the IP address x.y.z.t (replace it with your own IP address). Please take note that, since the domain only has VPC access, this client needs to sit inside the same VPC or otherwise be able to reach it (for example through a VPN or VPC peering connection).

Finally, review and confirm the creation of your Elasticsearch domain. Once the creation process is complete, log in to your client (the one with the IP address x.y.z.t), open a browser, and navigate to the Kibana address. It is shown on the overview page of your Elasticsearch domain:

and here is what you will see:

Please take note that your Kibana interface is going to be pretty much blank; the screenshot above is from an Elasticsearch instance that already has some data in it. So, no panic 🙂

On the Project1 AWS Account create and configure CloudTrail, CloudWatch, and a Lambda function to deliver logs to Elasticsearch

Log into the Project1 AWS Account and open CloudTrail from the menu. You can follow this guide here to configure the settings for your CloudTrail. Please take note that this link also shows you how to configure CloudTrail to send the logs to an S3 Bucket. You can skip that part if you want, because it is completely optional; I personally prefer to also send the logs to an S3 Bucket for archival reasons.

Once you are done creating your CloudTrail, go back to it, scroll down to CloudWatch Logs, and click Configure. Create a log group with the name “CloudTrail/AuditLogs” and click Continue:

It then takes you to a different page that asks you to create a new Role and Policy. Leave the default settings as they are and click Allow to be redirected back to the CloudTrail page. Please note that you might sometimes see an error page when creating a policy this way. Simply repeat the steps; the error will disappear and you will see the following:

From the console menu, open CloudWatch and click Logs. Select the Log Group “CloudTrail/AuditLogs” and, from the Actions menu, click Stream to Amazon Elasticsearch Service. You will be redirected to a different page.

Select Another Account and provide the Amazon Elasticsearch ARN and Endpoint. You can get both from the overview page of the Elasticsearch service. Please remember to drop the https:// at the beginning of the Endpoint:
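If you are scripting this step, stripping the scheme from the endpoint is a one-liner. A small sketch (the endpoint shown is a made-up example, not a real domain):

```python
from urllib.parse import urlparse

def endpoint_host(endpoint: str) -> str:
    """Return the Elasticsearch endpoint without the https:// scheme."""
    parsed = urlparse(endpoint)
    # urlparse only fills netloc when a scheme is present
    return parsed.netloc if parsed.netloc else endpoint

# Hypothetical endpoint for illustration
print(endpoint_host("https://vpc-esmaeiles1-abc123.eu-central-1.es.amazonaws.com"))
# -> vpc-esmaeiles1-abc123.eu-central-1.es.amazonaws.com
```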

For the Lambda IAM Execution Role, select Create New IAM Role and let it create a new Role and Policy for you, similar to what we did in the previous step. Click Next, select JSON as the log format, then finish the configuration and start streaming.

The process above creates a Lambda function that is triggered every time a new log arrives and sends it to the Elasticsearch service in the Security and Auditing AWS Account.
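To see what that generated Lambda function is essentially doing, here is a simplified sketch: CloudWatch Logs hands the function a base64-encoded, gzipped JSON payload, and the function turns its log events into Elasticsearch bulk-index lines. This is an illustrative reimplementation of the idea, not the exact code AWS generates; the index name `cloudtrail` and the sample event are assumptions.

```python
import base64
import gzip
import json

def decode_cloudwatch_event(event):
    """Decode the gzipped, base64-encoded payload that CloudWatch Logs
    delivers to a subscribed Lambda function."""
    compressed = base64.b64decode(event["awslogs"]["data"])
    return json.loads(gzip.decompress(compressed))

def to_bulk_actions(payload, index="cloudtrail"):
    """Turn the decoded log events into Elasticsearch bulk-index lines
    (one action line followed by one document line per event)."""
    lines = []
    for log_event in payload["logEvents"]:
        lines.append(json.dumps({"index": {"_index": index, "_id": log_event["id"]}}))
        lines.append(log_event["message"])  # CloudTrail messages are already JSON
    return "\n".join(lines) + "\n"

# Simulate the event shape CloudWatch Logs delivers to the Lambda
sample = {"logGroup": "CloudTrail/AuditLogs",
          "logEvents": [{"id": "1", "timestamp": 0,
                         "message": '{"eventName": "ConsoleLogin"}'}]}
event = {"awslogs": {"data": base64.b64encode(
    gzip.compress(json.dumps(sample).encode())).decode()}}
print(to_bulk_actions(decode_cloudwatch_event(event)))
```

The real function additionally signs the request and POSTs the bulk body to the Elasticsearch endpoint, but the decode-and-transform step above is the core of it.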

You are pretty much done. If you give it 10 minutes and check Kibana again, you will see plenty of logs from the Project1 AWS Account. You can then use Kibana to create visualizations and graphs with all this data.
