Sync a Local directory with S3

import os
import sys
import boto3
import hashlib
from datetime import datetime
from botocore.exceptions import ClientError

boto3.setup_default_session(profile_name='default')

if len(sys.argv) < 3:
    print("Not enough arguments.")
    print("Usage: python3 py-sync.py [SOURCE_DIRECTORY] [DESTINATION_BUCKET_NAME]")
    sys.exit(1)

# Init objects
s3_client = boto3.client('s3')

SOURCE_DIR = sys.argv[1]
DESTINATION_BUCKET = sys.argv[2]

def check_file_exists(bucket, key):
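    # head_object raises ClientError with a 404 code when the key is missing;
    # any other error code is treated as "the object exists".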
    try:
        s3_client.head_object(Bucket=bucket, Key=key)
    except ClientError as e:
        return int(e.response['Error']['Code']) != 404
    return True

def md5(fname):
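    # Hash the file in 4 KB chunks so large files are not read into memory at once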
    hash_md5 = hashlib.md5()
    with open(fname, "rb") as f:
        for chunk in iter(lambda: f.read(4096), b""):
            hash_md5.update(chunk)
    return hash_md5.hexdigest()

print("Filename-Local", end=', ')
print("Filename-S3", end=', ')
print("File-Status", end=', ')
print("Action")

print("--------------", end=', ')
print("-----------", end=', ')
print("-----------", end=', ')
print("------")

for subdir, dirs, files in os.walk(SOURCE_DIR):
    for file in files:
        file_path_full = os.path.join(subdir, file)
        file_path_relative = os.path.relpath(file_path_full, SOURCE_DIR)
        file_key = file_path_relative.replace('\\', '/')

        print(file_path_full, end=', ')
        print('s3://' + DESTINATION_BUCKET + '/' + file_key, end=', ')

        if not check_file_exists(DESTINATION_BUCKET, file_key):  # File doesn't exist yet, upload it
            s3_client.upload_file(file_path_full, DESTINATION_BUCKET, file_key)
            print("New", end=', ')
            print("Uploading")

        else:
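            # Note: the S3 ETag equals the object's MD5 only for single-part uploads;
            # objects uploaded via multipart will never match and get re-uploaded each run.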
            response = s3_client.head_object(Bucket=DESTINATION_BUCKET, Key=file_key)
            md5_s3 = response['ETag'].strip('"')
            md5_local = md5(file_path_full)

            if md5_local != md5_s3:
                s3_client.upload_file(file_path_full, DESTINATION_BUCKET, file_key)
                print("Modified", end=', ')
                print("Uploading")

            else:
                print("No-Change", end=', ')
                print("Skipping")

Extend Linux host volume on AWS

1. First extend the volume from the AWS Console/CLI (a boto3 sketch is at the end of this section)
2. Check the file system type:
    $ sudo file -s /dev/xvdf

# For ext3/ext4

$ df -h                       (check the size the OS currently sees for the volume/partition)
$ sudo growpart /dev/xvdf 1   (optional - only needed if the volume has a partition)
$ lsblk
$ sudo resize2fs /dev/xvdf    (use sudo resize2fs /dev/xvdf1 for a partition)

See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
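
Step 1 (resizing the EBS volume itself) can also be scripted with boto3. A minimal sketch, assuming a hypothetical volume ID and a new size of 200G:

import time
import boto3

ec2 = boto3.client('ec2')

volume_id = 'vol-0123456789abcdef0'   # hypothetical - use your volume's ID

# Ask EBS to grow the volume to 200 GiB
ec2.modify_volume(VolumeId=volume_id, Size=200)

# Wait until the modification is past the 'modifying' state, then run the
# OS-level steps above (growpart/resize2fs)
while True:
    mod = ec2.describe_volumes_modifications(VolumeIds=[volume_id])['VolumesModifications'][0]
    if mod['ModificationState'] in ('optimizing', 'completed'):
        break
    time.sleep(5)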

Fix hostname in RHEL7

I found that even after I changed the hostname on a RHEL host (an AWS EC2 instance), it reverted after a reboot. Here is the fix:

$ sudo hostnamectl set-hostname --static abc.ot.cloud.example.com.au
$ sudo vi /etc/cloud/cloud.cfg

# Add to the end of the file:
preserve_hostname: true
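
After the next reboot, confirm the name has stuck with hostnamectl status.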

Add a new Volume to EC2 Host (Linux)

# From the AWS Console, create a 200G volume in the correct AZ and attach it to the running instance (a boto3 sketch is at the end of this section)

Device name example: /dev/xvdf

# Connect to the instance using SSH

# List the block devices

$ lsblk

# Check whether a file system already exists on the device

# If the output is just 'data', the device is unformatted

$ sudo file -s /dev/xvdf

# Create a file system

$ sudo mkfs -t ext4 /dev/xvdf

# Create the mount point (if needed) and mount the file system

$ sudo mkdir -p /u01
$ sudo mount /dev/xvdf /u01

# Add an entry to /etc/fstab so the mount persists across reboots

# The device UUID is required (it appears in the output of the first command; sudo blkid /dev/xvdf also shows it)

$ sudo file -s /dev/xvdf

$ sudo cp /etc/fstab /etc/fstab.bak

$ sudo vi /etc/fstab

UUID=524df55a-5d38-4380-9d53-e95856d3c0b1       /u01   ext4    defaults,nofail        0       2

# ext3 example:

UUID=a3828273-2053-41f9-97cc-f12e23436d16       /u01   ext3    defaults,nofail        0       2
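
The console step at the top of this section can also be done with boto3. A minimal sketch, assuming a hypothetical instance ID and Availability Zone (the volume must be created in the same AZ as the instance):

import boto3

ec2 = boto3.client('ec2')

instance_id = 'i-0123456789abcdef0'   # hypothetical - use your instance's ID
az = 'ap-southeast-2a'                # must match the instance's AZ

# Create a 200G volume and wait for it to become available
volume_id = ec2.create_volume(AvailabilityZone=az, Size=200, VolumeType='gp2')['VolumeId']
ec2.get_waiter('volume_available').wait(VolumeIds=[volume_id])

# Attach it to the instance as /dev/xvdf, then continue with lsblk/mkfs/mount above
ec2.attach_volume(VolumeId=volume_id, InstanceId=instance_id, Device='/dev/xvdf')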

Open (Listen On) port via PowerShell

To check network connectivity, you can quickly open a listening port with PowerShell and check whether another host can connect to it:

    $port = 8080
    $endpoint = New-Object System.Net.IPEndPoint ([System.Net.IPAddress]::Any, $port)
    $listener = New-Object System.Net.Sockets.TcpListener $endpoint
    $listener.Server.ReceiveTimeout = 3000
    $listener.Start()
    try {
        Write-Host "Listening on port $port, press CTRL+C to cancel"
        while ($true) {
            if (!$listener.Pending()) {
                Start-Sleep -Seconds 1
                continue
            }
            $client = $listener.AcceptTcpClient()
            $client.Client.RemoteEndPoint | Add-Member -NotePropertyName DateTime -NotePropertyValue (Get-Date) -PassThru
            $client.Close()
        }
    }
    catch {
        Write-Error $_
    }
    finally {
        $listener.Stop()
        Write-Host "Listener closed safely"
    }
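
From another machine, you can then test the port with something like Test-NetConnection -ComputerName <host> -Port 8080.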

AWS Certified DevOps Engineer - Professional Completed

Congratulations! You have successfully completed the AWS Certified DevOps Engineer - Professional exam and you are now AWS Certified. You can now use the AWS Certified DevOps Engineer - Professional credential to gain recognition and visibility for your proven experience with AWS services.
...
Overall Score: 78%
Topic Level Scoring:
1.0  Continuous Delivery and Process Automation: 70%
2.0  Monitoring, Metrics, and Logging: 93%
3.0  Security, Governance, and Validation: 75%
4.0  High Availability and Elasticity: 91%

Fix Timezone

WINDOWS

Check Timezone

> [System.TimeZone]::CurrentTimeZone.StandardName

Fix Timezone

> C:\windows\system32\tzutil /s "AUS Eastern Standard Time"

LINUX

$ sudo timedatectl set-timezone Australia/Canberra

or

$ sudo ln -sf /usr/share/zoneinfo/America/Los_Angeles /etc/localtime
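
To verify the change, run timedatectl (no arguments) on Linux, or tzutil /g on Windows.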