Infrastructure Cost Optimization with Lambda

Having multiple environments is important for building a continuous integration/deployment pipeline and for reproducing production bugs with ease, but this comes at a price. To reduce the cost of your AWS infrastructure, instances that run 24/7 unnecessarily (sandbox & staging environments) must be shut down outside of regular business hours.

The figure below describes an automated process for scheduling the stopping and starting of instances to help cut costs. The solution is a perfect example of using serverless computing.

Note: full code is available on my GitHub.

Two Lambda functions will be created; they will scan all environments looking for a specific tag. The tag we use is named Environment. Instances without an Environment tag will not be affected:
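
As a rough sketch, the scan each function performs boils down to the equivalent of the following AWS CLI query (the tag value sandbox is illustrative):

    # List the IDs of all instances carrying the Environment tag with the target value.
    aws ec2 describe-instances \
      --filters "Name=tag:Environment,Values=sandbox" \
      --query "Reservations[].Instances[].InstanceId" \
      --output text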

The StartEnvironment function will call the StartInstances method with the list of instance IDs returned by the scan:
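
A minimal sketch of that call, assuming the instance IDs collected by the scan are stored in a shell variable:

    # Start every instance returned by the tag scan (INSTANCE_IDS is a space-separated list).
    aws ec2 start-instances --instance-ids $INSTANCE_IDS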

Similarly, the StopEnvironment function will call the StopInstances method:
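
Its equivalent, under the same assumption:

    # Stop every instance returned by the tag scan.
    aws ec2 stop-instances --instance-ids $INSTANCE_IDS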

Finally, both functions will post a message to a Slack channel for real-time notification:
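
The notification itself is a plain HTTP POST to the Slack incoming WebHook; here is a sketch with a placeholder WebHook URL and message:

    # Post a notification to Slack (the WebHook URL below is a placeholder).
    curl -s -X POST "https://hooks.slack.com/services/T0000/B0000/XXXXXXXX" \
      -H "Content-Type: application/json" \
      -d '{"text": "sandbox environment has been started"}'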

Now that our functions are defined, let's build the deployment packages (zip files) using the following Bash script:
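
The script can be as simple as the sketch below (the directory and file names are illustrative and should match your handler files):

    #!/bin/bash
    # Build one deployment package (zip) per function.
    zip -j start-environment.zip start-environment/*
    zip -j stop-environment.zip stop-environment/*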

The functions require an IAM role to be able to interact with EC2. The StartEnvironment function has to be able to describe and start EC2 instances:
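
The policy could look like the following sketch (for a stricter setup, scope the Resource down to your own instances):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "ec2:DescribeInstances",
            "ec2:StartInstances"
          ],
          "Resource": "*"
        }
      ]
    }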

The StopEnvironment function has to be able to describe and stop EC2 instances:
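
The same policy, with the stop permission instead:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "ec2:DescribeInstances",
            "ec2:StopInstances"
          ],
          "Resource": "*"
        }
      ]
    }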

Finally, create an IAM role for each function and attach the above policies:
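
For example, for the StartEnvironment function (the role, policy and file names are illustrative; the trust policy must allow lambda.amazonaws.com to assume the role):

    # Create the role and attach the inline policy defined above.
    aws iam create-role --role-name StartEnvironmentRole \
      --assume-role-policy-document file://lambda-trust-policy.json
    aws iam put-role-policy --role-name StartEnvironmentRole \
      --policy-name start-ec2-instances \
      --policy-document file://start-policy.json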

The script will output the ARN for each IAM role:

Before jumping to the deployment part, we need to create a Slack WebHook to be able to post messages to the Slack channel:

Next, use the following script to deploy your functions to AWS Lambda (make sure to replace the IAM roles, Slack WebHook token & the target environment):
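
A sketch of the core command the script runs for each function (the runtime, handler and role ARN are illustrative; replace them with your own values and repeat for StopEnvironment):

    # Create the StartEnvironment Lambda function from the deployment package.
    aws lambda create-function \
      --function-name StartEnvironment \
      --runtime nodejs12.x \
      --handler index.handler \
      --role "arn:aws:iam::ACCOUNT_ID:role/StartEnvironmentRole" \
      --zip-file fileb://start-environment.zip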

Once deployed, if you sign in to the AWS Management Console and navigate to the Lambda Console, you should see that both functions have been deployed successfully:

StartEnvironment:

StopEnvironment:

To further automate the process of invoking the Lambda functions at the right time, AWS CloudWatch Scheduled Events will be used.

Create a new CloudWatch rule with the below cron expression (it will be invoked every day at 9 AM):
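
From the CLI, the rule and its Lambda target could be created as follows (the rule name and ARN are placeholders):

    # Trigger StartEnvironment every day at 9 AM GMT.
    aws events put-rule \
      --name StartEnvironmentRule \
      --schedule-expression "cron(0 9 * * ? *)"
    aws events put-targets --rule StartEnvironmentRule \
      --targets "Id"="1","Arn"="arn:aws:lambda:REGION:ACCOUNT_ID:function:StartEnvironment"
    # Note: when wiring this up via the CLI, also grant CloudWatch Events permission
    # to invoke the function (aws lambda add-permission) before the schedule will fire.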

And another rule to stop the environment at 6 PM:
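
Likewise, from the CLI:

    # Trigger StopEnvironment every day at 6 PM GMT (rule name is a placeholder).
    aws events put-rule \
      --name StopEnvironmentRule \
      --schedule-expression "cron(0 18 * * ? *)"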

Note: All times are GMT.

Testing:

a – Stop Environment

Result:

b – Start Environment

Result:

The solution is easy to deploy and can help reduce operational costs.

Full code can be found on my GitHub. Make sure to drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Setup Raspberry PI 3 as AWS VPN Customer Gateway

In my previous article, I showed you how to use a VPN software solution like OpenVPN to create a secure tunnel to your AWS private resources. In this post, I will walk you through, step by step, how to set up a secure bridge to your remote AWS VPC subnets from your home network, using a Raspberry Pi as a Customer Gateway.

To get started, find your Home Router public-facing IP address:
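
You can grab it from your router's admin page, or from the command line using a public IP-echo service:

    curl -s https://checkip.amazonaws.com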

Next, sign in to AWS Management Console, navigate to VPC Dashboard and create a new VPN Customer Gateway:

Next, create a Virtual Private Gateway:

And attach it to the target VPC:

Then, create a VPN Connection with the Customer Gateway and the Virtual Private Gateway:

Note: Make sure to add your Home CIDR subnet to the Static IP Prefixes section.

Once the VPN Connection is created, click on the “Tunnel Details” tab; you should see two tunnels for redundancy:

It may take a few minutes to create the VPN connection. When it's ready, select the connection, choose “Download Configuration”, then open the configuration file and write down your pre-shared key and tunnel IP:

I used a Raspberry Pi 3 (quad-core 1.2 GHz CPU, 1 GB RAM) running Raspbian with the SSH server enabled (default username & password: pi/raspberry), so you can log in and start configuring the Pi:

IPsec kernel support must be installed. Therefore, you must install openswan on your Pi:
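
On Raspbian this is typically done as follows (on more recent releases the package may be provided by libreswan or strongswan instead):

    sudo apt-get update
    sudo apt-get install -y openswan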

Update the /etc/ipsec.conf file as below:
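
A minimal sketch of what this file typically looks like for such a setup (adjust the virtual_private ranges to your own networks):

    config setup
        nat_traversal=yes
        virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
        protostack=netkey

    include /etc/ipsec.d/*.conf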

Create a new IPsec connection in /etc/ipsec.d/home-to-aws.conf (a sketch of the file follows the parameter list below):

  • left: Your Raspberry PI private IP.
  • leftid: Your Home Router public-facing IP.
  • leftsubnet: CIDR of your Home Subnet.
  • right: Virtual Private Gateway Tunnel IP.
  • rightsubnet: CIDR of your VPC.
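
Putting those parameters together, the connection definition could look like the following sketch (all IPs and CIDRs are example values; the IKE/ESP parameters should match the configuration file you downloaded):

    conn home-to-aws
        type=tunnel
        authby=secret
        auto=start
        # left: Raspberry Pi private IP (example value)
        left=192.168.1.20
        # leftid: Home Router public-facing IP (placeholder)
        leftid=YOUR_HOME_PUBLIC_IP
        # leftsubnet: Home subnet CIDR (example value)
        leftsubnet=192.168.1.0/24
        # right: Virtual Private Gateway tunnel IP (from the downloaded configuration)
        right=TUNNEL_1_OUTSIDE_IP
        # rightsubnet: VPC CIDR (example value)
        rightsubnet=10.0.0.0/16
        ike=aes128-sha1;modp1024
        ikelifetime=8h
        phase2alg=aes128-sha1;modp1024
        keylife=1h
        pfs=yes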

Add the tunnel pre-shared key to /var/lib/openswan/ipsec.secrets.inc:
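
The file expects one line per tunnel, in the following format (placeholders shown; use the pre-shared key from the downloaded configuration):

    YOUR_HOME_PUBLIC_IP TUNNEL_1_OUTSIDE_IP : PSK "your-pre-shared-key"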

To enable the IPv4 forwarding, edit /etc/sysctl.conf, and ensure the following lines are uncommented:
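
At a minimum, IPv4 forwarding must be enabled; disabling ICMP redirects is also common practice for a VPN gateway:

    net.ipv4.ip_forward=1
    net.ipv4.conf.all.accept_redirects=0
    net.ipv4.conf.all.send_redirects=0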

Run sysctl -p to reload it. Then, restart IPsec service:
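
For example:

    sudo sysctl -p
    sudo service ipsec restart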

Verify if the service is running correctly:
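
For example:

    sudo service ipsec status
    # for a more detailed check of the IPsec setup:
    sudo ipsec verify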

If you go back to your AWS Dashboard, you should see the 1st tunnel status changed to UP:

Add a new route entry that forwards traffic to your home subnet through the VPN Gateway:
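
This goes in the route table of your VPC subnets; the CLI equivalent would be something like the following (the route table ID, home CIDR and gateway ID are placeholders):

    aws ec2 create-route \
      --route-table-id rtb-xxxxxxxx \
      --destination-cidr-block 192.168.1.0/24 \
      --gateway-id vgw-xxxxxxxx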

Note: Follow the same steps above to set up the 2nd tunnel for resiliency & high availability of the VPN connectivity.

Launch an EC2 instance in the private subnet to verify the VPN connection:

Allow SSH only from your Home Gateway CIDR:

Connect via SSH using the instance private IP address:

 

Congratulations! You can now connect securely to your private EC2 instances.

To take it further and connect from other machines in the same Home Network, add a static route as described below:

1 – Windows
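
From an elevated Command Prompt (the VPC CIDR 10.0.0.0/16 and the Raspberry Pi IP 192.168.1.20 are example values):

    route ADD 10.0.0.0 MASK 255.255.0.0 192.168.1.20 -p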

2 – Linux
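
Using the same example values:

    sudo ip route add 10.0.0.0/16 via 192.168.1.20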

3 – Mac OS X
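
And on macOS, again with the same example values:

    sudo route -n add 10.0.0.0/16 192.168.1.20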

Test it out:

 

AWS OpenVPN Access Server

Being able to access AWS resources directly in a secure way can be very useful. To achieve this, you can:

  • Set up a dedicated connection with AWS Direct Connect
  • Use a network appliance
  • Use a software-defined private network solution like OpenVPN

In this post, I will walk you through how to create an OpenVPN server on AWS, so you can connect securely to your VPC, private network resources and applications from any device, anywhere.

To get started, sign in to your AWS Management Console and launch an EC2 instance from the OpenVPN Access Server AWS Marketplace offering:

For demo purposes, choose t2.micro:

Use the default settings, with the exception of “Enable termination protection”, as we don't want our VPN to be terminated by accident:

Assign a new Security Group as below:

  • TCP – 22 : Remote access to the instance.
  • TCP – 443 : HTTPS, this is the interface used by users to log on to the VPN server and retrieve their keying and installation information.
  • TCP – 943 : OpenVPN Admin Web Dashboard.
  • UDP – 1194 : OpenVPN UDP Port.

To ensure our VPN instance's public IP address doesn't change if the instance is stopped, assign an Elastic IP to it:

For simplicity, I added an A record in Route 53 which points to the instance Elastic IP:

Once the AMI is successfully launched, you will need to connect to the server via SSH using the DNS record:
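
For example, assuming your key pair is key.pem (the OpenVPN Access Server Marketplace AMI uses openvpnas as its default SSH user):

    ssh -i key.pem openvpnas@openvpn.slowcoder.com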

The first time you connect, you will be prompted to set up the OpenVPN server:

Set up a new password for the openvpn admin user:

Point your browser to https://openvpn.slowcoder.com, and log in using the openvpn credentials:

Download the OpenVPN Connect Client. After the installation is complete, click on “Import”, then “From server”:

Then type the OpenVPN server's DNS name:

Enter openvpn as the username, enter the same password as before, and click on “Connect”:

After you are connected, you should see a green check mark:

To verify the client is connected, log in to the OpenVPN Admin Dashboard at https://openvpn.slowcoder.com/admin:

Finally, create a simple web server instance in a private subnet to verify the VPN is working:

If you point your browser to the web server's private address, you should see a simple HTML page.

Highly Available WordPress Blog

In this post you will learn about the easiest way to deploy a fault tolerant and scalable WordPress on AWS.

To get started, set up a Swarm cluster on AWS by following my tutorial “Setup Docker Swarm on AWS using Ansible & Terraform”:

Your cluster is now ready to use, and you are good to go!

WordPress stores some files on disk (plugins, themes, images, …), which causes a problem if you want to run your blog on a fleet of EC2 instances to handle high traffic:

That’s where AWS EFS (Elastic File System) comes into play. The idea is to mount a shared volume using the NFS protocol on each host to synchronize files between all nodes in the cluster.

So create an Elastic File System, and make sure to deploy it in the same VPC in which your Swarm cluster was created:

Once created, note the DNS name:

Now, mount Amazon EFS file systems via the NFSv4.1 protocol on each node:
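
The mount command is the standard NFSv4.1 one; replace the file system DNS name with the one noted above and pick the mount point you prefer:

    sudo mkdir -p /mnt/efs
    sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
      fs-xxxxxxxx.efs.us-east-1.amazonaws.com:/ /mnt/efs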

We can verify the mount with a plain df -h command:

WordPress requires a relational database. Create an Amazon Aurora database:

Wait a couple of minutes for the database to be ready, then copy the database endpoint:

To deploy the stack, I’m using the following Docker Compose file:
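
A simplified sketch of what such a Compose file can look like is shown below (the domain name, database credentials and the EFS mount path are assumptions; the full file is in the GitHub repository):

    version: "3.3"

    services:
      traefik:
        image: traefik:1.7
        command: --api --docker --docker.swarmmode --docker.watch
        ports:
          - "80:80"
          - "8080:8080"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        deploy:
          placement:
            constraints: [node.role == manager]

      wordpress:
        image: wordpress:latest
        environment:
          WORDPRESS_DB_HOST: your-aurora-endpoint.rds.amazonaws.com
          WORDPRESS_DB_USER: wordpress
          WORDPRESS_DB_PASSWORD: changeme
          WORDPRESS_DB_NAME: wordpress
        volumes:
          - /mnt/efs/wp-content:/var/www/html/wp-content
        deploy:
          labels:
            traefik.port: "80"
            traefik.frontend.rule: "Host:blog.slowcoder.com"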

In addition to the WordPress container, I'm using Traefik as a reverse proxy to be able to scale out my blog easily with the docker service scale command.

In your Manager node run the following command to deploy the stack:
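
The stack name (wordpress here) is up to you:

    docker stack deploy --compose-file docker-compose.yml wordpress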

At this point, you should have a clean install of WordPress running.

Fire up your browser and point it to the manager's public IP address; you will be greeted with the familiar WordPress setup page:

If you're expecting high traffic, you can easily scale the WordPress service using the following command:
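
Assuming the stack and service are both named wordpress, scaling to 5 replicas looks like this:

    docker service scale wordpress_wordpress=5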

Verify Traefik Dashboard:

That’s how to build a scalable WordPress blog with no single points of failure.

Highly Available Docker Registry on AWS with Nexus

Have you ever wondered how you can build a highly available & resilient Docker registry to store your Docker images?


In this post, we will set up an EC2 instance inside a Security Group and create an A record pointing to the server's Elastic IP address, as follows:

To provision the infrastructure, we will use Terraform as an IaC (Infrastructure as Code) tool. The advantage of using this kind of tool is the ability to quickly spin up a new environment in a different AWS region (or with a different IaaS provider) in case of an incident (disaster recovery).

Start by cloning the following GitHub repository:

Inside docker-registry folder, update the variables.tfvars with your own AWS credentials (make sure you have the right IAM policies).

I specified a shell script to be used as user_data when launching the instance. It will simply install the latest version of Docker CE and turn the instance into a Docker Swarm mode node (to benefit from the replication & high availability of the Nexus container).
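
A sketch of such a user_data script (the Nexus service definition, including the registry connector port 5000, is an assumption and must match what you configure in Nexus later):

    #!/bin/bash
    # Install the latest Docker CE and put the node in Swarm mode (single-node swarm).
    curl -fsSL https://get.docker.com | sh
    docker swarm init
    # Run Nexus as a replicated Swarm service (add a volume for persistent storage in production).
    docker service create --name nexus \
      --publish 8081:8081 --publish 5000:5000 \
      sonatype/nexus3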

Note: Of course, you can use a configuration management tool like Ansible or Chef to provision the server once it's created.

Then, issue the following command to create the infrastructure:
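
That is, from inside the docker-registry folder:

    terraform init
    terraform plan -var-file=variables.tfvars
    terraform apply -var-file=variables.tfvars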

Once created, you should see the Elastic IP of your instance:

Connect to your instance via SSH:

Verify that the Docker Engine is running in Swarm Mode:
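
For example:

    docker info --format '{{.Swarm.LocalNodeState}}'
    # should print: active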

Check if Nexus service is running:
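
Assuming the service is named nexus:

    docker service ls
    docker service ps nexus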

If you go back to your AWS Management Console and navigate to the Route 53 Dashboard, you should see that a new A record has been created, pointing to the instance IP address.

Point your favorite browser to the Nexus Dashboard URL (registry.slowcoder.com:8081). Log in and create a Docker hosted repository as below:

Edit the /etc/docker/daemon.json file so that it has the following content:
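
Assuming you exposed the Docker hosted repository on connector port 5000, a minimal daemon.json would be:

    {
      "insecure-registries": ["registry.slowcoder.com:5000"]
    }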

Note: For production it’s highly recommended to secure your registry using a TLS certificate issued by a known CA.

Restart Docker for the changes to take effect:
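
On a systemd-based distribution:

    sudo systemctl restart docker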

Log in to your registry with the Nexus credentials (admin/admin123):
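
Using the same registry address and connector port as above:

    docker login registry.slowcoder.com:5000 -u admin -p admin123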

In order to push a new image to the registry:
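
Tag a local image with the registry prefix, then push it (the image name is illustrative):

    docker tag nginx:latest registry.slowcoder.com:5000/mlabouardy/nginx:latest
    docker push registry.slowcoder.com:5000/mlabouardy/nginx:latest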

Verify that the image has been pushed to the remote repository:

To pull the Docker image:
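
From any machine configured to trust the registry:

    docker pull registry.slowcoder.com:5000/mlabouardy/nginx:latest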

Note: Sometimes you end up with many unused & dangling images that can quickly take up a significant amount of disk space:

You can either use the Nexus CLI tool or create a Nexus Task to clean up old Docker images:

Populate the form as below:

The task above will run every day at midnight to purge unused Docker images from the “mlabouardy” registry.