AWS Certified DevOps Engineer - Professional Completed

Congratulations! You have successfully completed the AWS Certified DevOps Engineer - Professional exam and you are now AWS Certified. You can now use the AWS Certified DevOps Engineer - Professional credential to gain recognition and visibility for your proven experience with AWS services.
...
Overall Score: 78%
Topic Level Scoring:
1.0  Continuous Delivery and Process Automation: 70%
2.0  Monitoring, Metrics, and Logging: 93%
3.0  Security, Governance, and Validation: 75%
4.0  High Availability and Elasticity: 91%

Fix Timezone

WINDOWS

Check Timezone

> [System.TimeZone]::CurrentTimeZone.StandardName

Fix Timezone

> C:\windows\system32\tzutil /s "AUS Eastern Standard Time"

LINUX

$ sudo timedatectl set-timezone Australia/Canberra

or

$ sudo ln -sf /usr/share/zoneinfo/Australia/Canberra /etc/localtime
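
To verify the current setting on Linux (on systemd-based distributions), a quick check:

$ timedatectl | grep "Time zone"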

Supercharge your CloudFormation templates with Jinja2 Templating Engine

If you are working in an AWS public cloud environment, chances are that you have authored a number of CloudFormation templates over the years to define your infrastructure as code. As powerful as this tool is, it has a glaring shortcoming: the templates are fairly static, with no inline template expansion feature (think GCP Cloud Deployment Manager). Due to this limitation, many teams end up copy-pasting similar templates to cater for minor differences like environment (dev, test, prod, etc.) and resource names (S3 bucket names, etc.).

Enter Jinja2, a modern and powerful templating language for Python. In this blog post I will demonstrate a way to use Jinja2 to enable dynamic expressions and perform variable substitution in your CloudFormation templates.

First, let's get the prerequisites out of the way. To use Jinja2, we need to install Python, pip and, of course, Jinja2.

Install Python

$ sudo yum install python

Install pip

$ curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"
$ sudo python get-pip.py

Install Jinja2

$ pip install Jinja2
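
To confirm the installation, you can print the installed version as a quick sanity check:

$ python -c "import jinja2; print(jinja2.__version__)"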

To invoke Jinja2, we will use a simple Python wrapper script.

$ vi j2.py

Copy the following contents to the file j2.py

import os
import sys
import jinja2

# Read the template from stdin, render it with Jinja2, and write the result
# to stdout. The OS environment variables are exposed to the template as the
# 'env' dictionary.
sys.stdout.write(jinja2.Template(sys.stdin.read()).render(env=os.environ))

Save and exit the editor

Now let’s create a simple CloudFormation template and transform it through Jinja2:

$ vi template1.yaml

Copy the following contents to the file template1.yaml

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Simple S3 bucket for {{ env['ENVIRONMENT_NAME'] }}
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: installfiles-{{ env['AWS_ACCOUNT_NUMBER'] }}

As you can see, it's the most basic CloudFormation template, with one exception: we are using Jinja2 variables to substitute values from environment variables (note that the bucket name is lowercase, as S3 requires). Now let's run this template through Jinja2.

First, export the environment variables:

$ export ENVIRONMENT_NAME=Development
$ export AWS_ACCOUNT_NUMBER=1234567890


Run the following command:

$ cat template1.yaml | python j2.py

The result of this command will be as follows:

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Simple S3 bucket for Development
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: installfiles-1234567890

As you can see, Jinja2 has expanded the variables in the template. This gives us a powerful mechanism for inserting environment variables into our CloudFormation templates.
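
As an aside, Jinja2 filters can make these substitutions more forgiving. For example, the built-in default filter lets the template fall back to a value when the environment variable is not set (the fallback value here is just an illustration):

Description: Simple S3 bucket for {{ env['ENVIRONMENT_NAME'] | default('Development') }}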

Let's take another example: what if we wanted to create multiple S3 buckets in an automated manner? Generally in such a case we would have to copy-paste the S3 resource block. With Jinja2, this becomes a matter of adding a simple "for" loop:

$ vi template2.yaml

Copy the following contents to the file template2.yaml

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Simple S3 bucket for {{ env['ENVIRONMENT_NAME'] }}
Resources:
{% for i in range(1,3) %}
  S3Bucket{{ i }}:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: installfiles-{{ env['AWS_ACCOUNT_NUMBER'] }}-{{ i }}
{% endfor %}

Run the following command:

$ cat template2.yaml | python j2.py

The result of this command will be as follows:

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Simple S3 bucket for Development
Resources:
  S3Bucket1:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: installfiles-1234567890-1
  S3Bucket2:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: installfiles-1234567890-2

As you can see, the resulting template has two S3 resource blocks. The output of the command can be redirected to another template file, to be used later for stack creation.
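
For example, assuming the AWS CLI is configured, you could render the template to a file and create a stack from it (the stack and file names here are placeholders):

$ cat template2.yaml | python j2.py > template2-rendered.yaml
$ aws cloudformation create-stack --stack-name my-buckets --template-body file://template2-rendered.yaml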

I am sure you will appreciate the possibilities Jinja2 brings to enhance your CloudFormation templates. Do note that I have barely scratched the surface of this topic; I highly recommend having a look at the Template Designer Documentation at http://jinja.pocoo.org/docs/2.10/templates/ to explore more possibilities. If you are using Ansible, note that Ansible uses Jinja2 templating to enable dynamic expressions and access to variables. In this case you can get rid of the Python wrapper script mentioned in this article and use Ansible directly for template expansion.
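
For instance, a minimal Ansible playbook along these lines (file names are hypothetical) renders a Jinja2 template with Ansible's built-in template module. Note that under Ansible you would reference environment variables with the env lookup, e.g. {{ lookup('env', 'ENVIRONMENT_NAME') }}, rather than the env dictionary injected by our wrapper script:

---
- hosts: localhost
  tasks:
    - name: Expand Jinja2 expressions in the CloudFormation template
      template:
        src: template2.yaml.j2
        dest: template2-rendered.yaml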

Run an SSM Command against a set of EC2 Instances

A quick script today to run an SSM command against a set of EC2 instances. You can selectively target instances based on a tag.

import boto3
import os
from time import sleep

lambda_func_name = os.getenv("AWS_LAMBDA_FUNCTION_NAME", "")

if lambda_func_name == "":  # We are not running in AWS
    boto3.setup_default_session(profile_name='<profile_name>')

ec2_client = boto3.client('ec2', region_name='ap-southeast-2')
ssm_client = boto3.client('ssm', region_name='ap-southeast-2')  # use region code in which you are working

sleep_duration = 2

def get_instances(os_family):  # named 'os_family' to avoid shadowing the os module

    instances = []

    # Check running instances only
    response = ec2_client.describe_instances(
        Filters=[
            {
                'Name': 'instance-state-name',
                'Values': ['running']
            }
        ])

    # Iterate over instance(s)
    for r in response['Reservations']:
        for inst in r['Instances']:
            inst_id = inst['InstanceId']
            tags = inst.get('Tags', [])  # an instance may have no tags at all

            ins_tag = ""

            for tag in tags:
                if "OSFamily" in tag['Key']:
                    ins_tag = (tag['Value'])
                    break
                else:
                    ins_tag = "NA"

            if ins_tag == os:
                instances.append(inst_id)

    return instances

def ssm_run_command(instance_id, cmd, os_family):

    document = "AWS-RunPowerShellScript"

    if (os_family == "Linux"):
        document = "AWS-RunShellScript"

    response = ssm_client.send_command(
        InstanceIds=[
            instance_id  # the instance to run against; the API accepts multiple IDs
        ],
        DocumentName=document,
        Parameters={
            'commands': [
                cmd
            ]
        },
    )

    #print(response)

    sleep(sleep_duration)  # give SSM time to run the command before fetching its output

    command_id = response['Command']['CommandId']
    output = ssm_client.get_command_invocation(CommandId=command_id, InstanceId=instance_id)

    return output['StandardOutputContent']

def check_time_zone_windows():

    os_family = "Windows"

    instances = get_instances(os_family)
    command = "[System.TimeZone]::CurrentTimeZone.StandardName"

    instances_with_wrong_tz = []

    for instance in instances:
        print("Checking instance: " + instance)
        time_zone = ssm_run_command(instance, command, os_family)
        #print(time_zone)

        if not time_zone.startswith("AUS"):  # e.g. "AUS Eastern Standard Time"
            instances_with_wrong_tz.append(instance)

    result_str = ""

    if (len(instances_with_wrong_tz) > 0):

        result_str = "Following " + os_family + " instances have wrong Timezone:\n\n"

        for instance in instances_with_wrong_tz:
            result_str = result_str + instance + "\n\n"

    return result_str

def check_time_zone_linux():
    os_family = "Linux"

    instances = get_instances(os_family)
    command = 'date +"%Z"'

    instances_with_wrong_tz = []

    for instance in instances:
        print("Checking instance: " + instance)
        time_zone = ssm_run_command(instance, command, os_family)
        #print(time_zone)

        if not time_zone.startswith("AE"):  # AEST/AEDT
            instances_with_wrong_tz.append(instance)

    result_str = ""

    if (len(instances_with_wrong_tz) > 0):

        result_str = "Following " + os_family + " instances have wrong Timezone:\n\n"

        for instance in instances_with_wrong_tz:
            result_str = result_str + instance + "\n\n"

    return result_str

def lambda_handler(event, context):

    result_lin = check_time_zone_linux()
    result_win = check_time_zone_windows()

    result = result_lin + "\n\n" + result_win

    print(result)

if __name__ == "__main__":
    lambda_handler(0, 0)
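
As an aside, send_command can also target instances by tag directly through its Targets parameter, removing the need to enumerate instance IDs yourself. A minimal sketch, reusing the OSFamily tag from the script above:

# Target all instances tagged OSFamily=Linux in one call
response = ssm_client.send_command(
    Targets=[
        {'Key': 'tag:OSFamily', 'Values': ['Linux']}
    ],
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': ['date +"%Z"']},
)

With Targets you would then use list_command_invocations to collect the per-instance output, rather than calling get_command_invocation per instance ID.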

Find missing Tags on EC2 Instances

# This script looks for missing tags on EC2 instances
# Initialize up to 5 environment variables (tag1...tag5) with the tag keys to check
# Typical tags to check are as follows:
#  - cpm backup
#  - monitor_site24x7
#  - Project
#  - Environment
#  - Owner

import boto3
import logging
import os

lambda_func_name = os.getenv("AWS_LAMBDA_FUNCTION_NAME", "")

if lambda_func_name == "":  # We are not running in AWS
    boto3.setup_default_session(profile_name='<profile_name>')

# setup simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)


def send_alert(alert_data):
    topic_arn = os.getenv("TopicARN", "")

    if topic_arn == "":
        print("send_alert: Missing topic ARN. Returning without sending alert.")
        return

    subject = os.getenv('CustomerID', '') + " - Missing EC2 Instances Tag Check"
    message = "Missing EC2 Instances Tag Check Results: \n\n" + alert_data

    print("send_alert: *** Sending alert ***")
    print("send_alert: Message: {0}".format(message))

    client = boto3.client('sns')
    response = client.publish(TargetArn=topic_arn,
                              Message=message,
                              Subject=subject)

def find_instances_with_missing_tags(tag_to_check):
    result_str = ""

    client = boto3.client('ec2', region_name='ap-southeast-2')
    # Check running or stopped instances
    response = client.describe_instances(
        Filters=[
            {
                'Name': 'instance-state-name',
                'Values': ['running', 'stopped']
            }
        ])
    # Iterate over instance(s)
    for r in response['Reservations']:
        for inst in r['Instances']:
            inst_id = inst['InstanceId']
            tags = inst.get('Tags', [])  # an instance may have no tags at all
            # Resolve the Name tag, defaulting when the instance is untagged
            ins_name = "{No-Name}"
            for tag in tags:
                if tag['Key'] == 'Name':
                    ins_name = tag['Value']
                    break

            # Resolve the tag we are checking for
            ins_tag = "NA"
            for tag in tags:
                if tag['Key'] == tag_to_check:
                    ins_tag = tag['Value']
                    break

            if ins_tag == "NA":
                s = "Tag '{}' missing for instance {} ({})\n\n".format(tag_to_check, ins_name, inst['InstanceId'])
                # print (s)
                result_str = result_str + s
                # else:
                #    print("Tag '{}' present for instance {} ({})".format(tag_to_check, ins_name, inst['InstanceId']))

    return result_str

def lambda_handler(event, context):
    tag1 = os.getenv("tag1", "")
    tag2 = os.getenv("tag2", "")
    tag3 = os.getenv("tag3", "")
    tag4 = os.getenv("tag4", "")
    tag5 = os.getenv("tag5", "")

    s = ""

    if tag1 != "": s = s + find_instances_with_missing_tags(tag1)
    if tag2 != "": s = s + find_instances_with_missing_tags(tag2)
    if tag3 != "": s = s + find_instances_with_missing_tags(tag3)
    if tag4 != "": s = s + find_instances_with_missing_tags(tag4)
    if tag5 != "": s = s + find_instances_with_missing_tags(tag5)

    if s != "":
        print(s)
        send_alert(s)

    return 0

if __name__ == "__main__":
    lambda_handler(0, 0)
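
When run locally (outside Lambda), the tag keys come from the tag1...tag5 environment variables, so an invocation might look like this (the script file name is illustrative):

$ export tag1=Project tag2=Owner tag3=Environment
$ python find_missing_tags.py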

Find EC2 instances not reporting to SSM (Python)

# This script looks for instances that are offline or not reporting to SSM

import boto3
import os

lambda_func_name = os.getenv("AWS_LAMBDA_FUNCTION_NAME", "")

if lambda_func_name == "":  # We are not running in AWS
    boto3.setup_default_session(profile_name='<profile_name>')


instances_missing_in_ssm = []

def send_alert():
    alert_data = ""
    topic_arn = os.getenv("TopicARN", "")

    if topic_arn == "":
        print("send_alert: Missing topic ARN. Returning without sending alert.")
        return

    for m in instances_missing_in_ssm:
        alert_data  = alert_data + m +"\n\n"

    subject = os.getenv('CustomerID', '') + " - Missing Managed (SSM) Instances"
    message = "The following instances are offline or not reporting to SSM: \n\n" + alert_data

    print("send_alert: *** Sending alert ***")
    print("send_alert: Message: {0}".format(message))

    client = boto3.client('sns')
    response = client.publish(TargetArn=topic_arn,
                              Message=message,
                              Subject=subject)

def check_instance_ssm_status(instance_id):

    #print ("Checking {}".format(instance_id))

    client_ssm = boto3.client('ssm', region_name='ap-southeast-2')
    # Query SSM for this instance's registration details
    response = client_ssm.describe_instance_information(
        InstanceInformationFilterList=[
            {
                'key': 'InstanceIds',
                'valueSet': [
                    instance_id,
                ]
            },
        ]
    )

    #print (response)

    # If the instance has never registered with SSM the list is empty and we
    # implicitly return None, which is treated as not 'Online' by the caller.
    for r in response['InstanceInformationList']:
        return r['PingStatus']

def find_instances_not_with_ssm():
    result_str = ""

    client_ec2 = boto3.client('ec2', region_name='ap-southeast-2')
    # Check running or stopped instances
    response = client_ec2.describe_instances(
        Filters=[
            {
                'Name': 'instance-state-name',
                'Values': ['running', 'stopped']
            }
        ])
    # Iterate over instance(s)
    for r in response['Reservations']:
        for inst in r['Instances']:
            inst_id = inst['InstanceId']
            tags = inst.get('Tags', [])  # an instance may have no tags at all
            # Resolve the Name tag (kept for reporting purposes)
            ins_name = "{No-Name}"
            for tag in tags:
                if tag['Key'] == 'Name':
                    ins_name = tag['Value']
                    break

            ssm_status = check_instance_ssm_status(inst_id)

            if ssm_status != 'Online':
                instances_missing_in_ssm.append(inst_id)

            #s = "{} ({})".format(ins_name, inst['InstanceId'])

    return len(instances_missing_in_ssm)

def lambda_handler(event, context):

    ret = find_instances_not_with_ssm()

    if (ret > 0):
        send_alert()

    return 0

if __name__ == "__main__":
    lambda_handler(0, 0)
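
As an aside, rather than calling describe_instance_information once per instance, boto3 also offers a paginator for this API that pulls the ping status of every managed instance in one pass. A minimal sketch:

ssm = boto3.client('ssm', region_name='ap-southeast-2')

# Collect the PingStatus of every instance registered with SSM
statuses = {}
for page in ssm.get_paginator('describe_instance_information').paginate():
    for info in page['InstanceInformationList']:
        statuses[info['InstanceId']] = info['PingStatus']

Note that instances which have never registered with SSM will not appear in this listing at all, which is why the script above starts from describe_instances instead.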

Enable Cost Allocation Tags to differentiate project based billing

When running in an AWS public cloud environment, there is often a need to dissect billing across different projects for accounting and accrual purposes. AWS provides a mechanism to aggregate related platform costs using a feature known as Cost Allocation Tags. With this feature you can designate tags on your AWS resources to track costs at a detailed level.

From the AWS Documentation:

Activating tags for cost allocation tells AWS that the associated cost data for these tags should be made available throughout the billing pipeline. Once activated, cost allocation tags can be used as a dimension of grouping and filtering in Cost Explorer, as well as for refining AWS budget criteria.

For example, to view cost allocation based on various project resources in your AWS account, you can tag these resources (EC2 instances, S3 buckets, etc.) with a tag named "Project". Next, the Project tag can be activated as a Cost Allocation Tag. From then on, AWS will include this tag in the associated cost data to allow filtering based on the tag in Cost Explorer reports.
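
For reference, applying such a tag to an EC2 instance from the AWS CLI looks like this (the instance ID and project name are placeholders):

$ aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=Project,Value=Apollo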

Let’s walk through the steps of setting this up:

  1. Log in to your AWS Management Console
  2. Tag all the resources with the tag key Project and a value corresponding to each of your projects. Understand that this may not be possible for every resource type.
  3. Navigate to My Billing Dashboard > Cost Allocation Tags
  4. Under the User-Defined Cost Allocation Tags section, select the tag "Project" and click the "Activate" button.

[Fig-1: Activating the "Project" tag under Cost Allocation Tags]

Once a tag is activated, it takes around 24 hours for billing data to appear under this tag.

Next, to view the costs under a project, do the following:

  1. Log in to your AWS Management Console
  2. Navigate to My Billing Dashboard > Cost Explorer
  3. Click “Launch Cost Explorer”
  4. On the right side of the page, under the Filters section, click the Tag filter, select the Project tag, and then choose the Tag Value to filter costs by project

[Screenshot: Filtering Cost Explorer by the Project tag]

As you can see from the screenshot below, we can now see exactly how much each project is costing per day (or per month, if selected).

[Screenshot: Daily cost per project in Cost Explorer]
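
If you prefer to pull these numbers programmatically, the Cost Explorer API exposes the same grouping. A minimal boto3 sketch, assuming the Project tag has already been activated (the date range is illustrative):

import boto3

# The Cost Explorer API is served from us-east-1
ce = boto3.client('ce', region_name='us-east-1')

# Daily unblended cost, grouped by the Project cost allocation tag
response = ce.get_cost_and_usage(
    TimePeriod={'Start': '2018-01-01', 'End': '2018-01-31'},
    Granularity='DAILY',
    Metrics=['UnblendedCost'],
    GroupBy=[{'Type': 'TAG', 'Key': 'Project'}]
)

for day in response['ResultsByTime']:
    for group in day['Groups']:
        print(day['TimePeriod']['Start'], group['Keys'][0],
              group['Metrics']['UnblendedCost']['Amount'])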

Some important points to consider:

  • Cost allocation tagging is "managed" via the master billing account at the root of the AWS organization. If your account is part of an organization, you will have to contact this account's administrator to enable the cost allocation tags. [Screenshot: Error message shown in a member account]
  • The error message in the previous screenshot will always appear in accounts that have not been granted the billing management permission.
  • Some resources, notably bandwidth charges, cannot be tagged and thus cannot be accounted for under cost allocation tagging. A common pattern in such cases is to calculate each project's percentage of the tagged costs and apportion the unaccounted charges based on those percentages.