Highly Available WordPress Blog

In this post, you will learn the easiest way to deploy a fault-tolerant and scalable WordPress blog on AWS.

To get started, set up a Swarm cluster on AWS by following the tutorial "Setup Docker Swarm on AWS using Ansible & Terraform":

Once that's done, your cluster is ready to use and you are ready to go!

WordPress stores some files on disk (plugins, themes, uploaded images, etc.), which is a problem if you want to run your blog on a fleet of EC2 instances to handle high traffic, since each instance would end up with its own copy of those files:

That's where AWS EFS (Elastic File System) comes into play. The idea is to mount a shared volume over the NFS protocol on each host so that files stay in sync across all nodes in the cluster.

So create an Elastic File System, and make sure to deploy it in the same VPC in which your Swarm cluster was created:

Once created, note the DNS name:

Now, mount the Amazon EFS file system via the NFSv4.1 protocol on each node:
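A minimal sketch of those commands, assuming an Amazon Linux host and a mount point at /mnt/efs; the file system ID and region in the DNS name are placeholders to replace with your own:

    # install the NFS client
    sudo yum install -y nfs-utils

    # create a mount point and mount the file system over NFSv4.1
    sudo mkdir -p /mnt/efs
    sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
        fs-xxxxxxxx.efs.us-east-1.amazonaws.com:/ /mnt/efs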

We can verify the mount with a plain df -h command:

WordPress requires a relational database. Create an Amazon Aurora database:
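If you prefer the AWS CLI over the console, the equivalent calls look roughly like this; the identifiers, credentials, security group, subnet group, and instance class are all assumptions to adapt (the security group must allow MySQL traffic from the Swarm nodes):

    # create the Aurora (MySQL-compatible) cluster
    aws rds create-db-cluster \
        --db-cluster-identifier wordpress \
        --engine aurora-mysql \
        --master-username wordpress \
        --master-user-password <strong-password> \
        --vpc-security-group-ids sg-xxxxxxxx \
        --db-subnet-group-name <your-subnet-group>

    # add a database instance to the cluster
    aws rds create-db-instance \
        --db-instance-identifier wordpress-1 \
        --db-cluster-identifier wordpress \
        --engine aurora-mysql \
        --db-instance-class db.t2.medium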

Wait a couple of minutes for the database to become available, then copy its endpoint:

To deploy the stack, I’m using the following Docker Compose file:
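The original file isn't reproduced here, but a minimal sketch looks something like the following; the domain name, database credentials, stack name, and the /mnt/efs path are assumptions, and the labels use the Traefik v1 syntax:

    version: "3.3"

    services:
      traefik:
        image: traefik:1.7
        command: --api --docker --docker.swarmmode --docker.watch
        ports:
          - "80:80"
          - "8080:8080"          # Traefik dashboard
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        networks:
          - blog
        deploy:
          placement:
            constraints: [node.role == manager]

      wordpress:
        image: wordpress:latest
        environment:
          WORDPRESS_DB_HOST: <aurora-cluster-endpoint>
          WORDPRESS_DB_USER: wordpress
          WORDPRESS_DB_PASSWORD: <password>
          WORDPRESS_DB_NAME: wordpress
        volumes:
          # shared EFS mount so every node sees the same themes/plugins/uploads
          - /mnt/efs/wp-content:/var/www/html/wp-content
        networks:
          - blog
        deploy:
          labels:
            - "traefik.port=80"
            - "traefik.frontend.rule=Host:blog.example.com"
            - "traefik.docker.network=wordpress_blog"   # assumes the stack is deployed as "wordpress"

    networks:
      blog:
        driver: overlay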

In addition to the WordPress container, I'm using Traefik as a reverse proxy so the blog can be scaled out easily with the docker service scale command.

On your manager node, run the following command to deploy the stack:
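Assuming the Compose file above is saved as docker-compose.yml and the stack is named wordpress:

    docker stack deploy --compose-file docker-compose.yml wordpress

    # check that both services are up
    docker stack services wordpress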

At this point, you should have a clean install of WordPress running.

Fire up your browser and point it to the manager's public IP address; you will be greeted with the familiar WordPress setup page:

If you're expecting high traffic, you can easily scale the WordPress service using the following command:
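For example, to run five replicas (the wordpress_wordpress service name follows from the stack and service names assumed above):

    docker service scale wordpress_wordpress=5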

Check the Traefik dashboard:

That's how to build a scalable WordPress blog with no single point of failure.

Highly Available Docker Registry on AWS with Nexus

Have you ever wondered how you can build a highly available and resilient Docker registry to store your Docker images?

In this post, we will set up an EC2 instance inside a security group and create an A record pointing to the server's Elastic IP address, as follows:

To provision the infrastructure, we will use Terraform as our IaC (Infrastructure as Code) tool. The advantage of this kind of tool is the ability to quickly spin up a new environment in a different AWS region (or with a different IaaS provider) in case of an incident (disaster recovery).

Start by cloning the following Github repository:

Inside the docker-registry folder, update variables.tfvars with your own AWS credentials (make sure you have the right IAM policies).

I specified a shell script to be used as user_data when launching the instance. It simply installs the latest version of Docker CE and switches the instance to Docker Swarm mode (to benefit from the replication and high availability of the Nexus container).
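The script isn't reproduced here, but a sketch of what such a user_data script typically looks like, assuming an Ubuntu AMI and Docker's convenience install script:

    #!/bin/bash
    # install the latest Docker CE
    curl -fsSL https://get.docker.com | sh
    usermod -aG docker ubuntu

    # switch the engine to a single-node Swarm
    docker swarm init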

Note: You can, of course, use a configuration management tool like Ansible or Chef to provision the server once it's created.

Then, issue the following command to create the infrastructure:
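From the docker-registry folder, that boils down to roughly:

    terraform init
    terraform apply -var-file=variables.tfvars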

Once created, you should see the Elastic IP of your instance:

Connect to your instance via SSH:
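For example (the key pair name and the ubuntu user are assumptions that depend on your setup and AMI):

    ssh -i key.pem ubuntu@<ELASTIC_IP>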

Verify that the Docker Engine is running in Swarm Mode:
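One quick way to check:

    docker info --format '{{.Swarm.LocalNodeState}}'   # should print "active"
    docker node ls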

Check if the Nexus service is running:
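Assuming the service is named nexus:

    docker service ls
    docker service ps nexus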

If you go back to the AWS Management Console and navigate to the Route 53 dashboard, you should see that a new A record has been created pointing to the instance IP address.

Point your favorite browser to the Nexus dashboard URL (registry.slowcoder.com:8081). Log in and create a Docker hosted registry as below:

Edit the /etc/docker/daemon.json file; it should have the following content:
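Since the registry isn't served over TLS in this setup, the Docker daemon must trust it as an insecure registry. The port below assumes the Docker connector in Nexus was created on port 5000:

    {
      "insecure-registries": ["registry.slowcoder.com:5000"]
    }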

Note: For production it’s highly recommended to secure your registry using a TLS certificate issued by a known CA.

Restart Docker for the changes to take effect:
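On a systemd-based host:

    sudo systemctl restart docker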

Log in to your registry with the Nexus credentials (admin/admin123):
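Again assuming port 5000 for the Docker connector:

    docker login registry.slowcoder.com:5000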

In order to push a new image to the registry:
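For example, with a hypothetical mlabouardy/nginx image:

    docker tag mlabouardy/nginx registry.slowcoder.com:5000/mlabouardy/nginx:1.0.0
    docker push registry.slowcoder.com:5000/mlabouardy/nginx:1.0.0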

Verify that the image has been pushed to the remote repository:

To pull the Docker image:
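Using the same example image:

    docker pull registry.slowcoder.com:5000/mlabouardy/nginx:1.0.0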

Note: Sometimes you end up with many unused and dangling images that can quickly take up a significant amount of disk space:

You can either use the Nexus CLI tool or create a Nexus task to clean up old Docker images:

Populate the form as below:

The task above will run every day at midnight to purge unused Docker images from the "mlabouardy" registry.

Continuous Monitoring with TICK stack

Monitoring your system is essential. It helps you detect issues before they cause major downtime that affects your customers and damages your business reputation, and it helps you plan growth based on the real usage of your system. But collecting metrics from different data sources isn't enough: you need to tailor your monitoring to your own business needs and define the right alerts so that any abnormal change in the system is reported.

In this post, I will show you how to set up a resilient continuous monitoring platform using only open-source projects, and how to define an alert to report changes in the system.

Clone the following Github repository:

1 – Terraform & AWS

In the tick-stack/terraform directory, update the variables.tfvars file with your own AWS credentials (make sure you have the right IAM policies):

Issue the following command to download the AWS provider plugin:
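That is:

    terraform init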

Issue the following command to provision the infrastructure:
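Then, using the variables file updated earlier:

    terraform apply -var-file=variables.tfvars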

2 – Ansible & Docker

Update the inventory file with your instance DNS name:
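A minimal sketch of what the inventory might look like; the group name, user, and key file are assumptions:

    [servers]
    ec2-xx-xx-xx-xx.compute-1.amazonaws.com ansible_user=ubuntu ansible_ssh_private_key_file=key.pem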

Then, install the Ansible custom role:

Execute the Ansible Playbook:
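Assuming the playbook at the root of the repository is called playbook.yml:

    ansible-playbook -i inventory playbook.yml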

Point your browser to http://DNS_NAME:8083; you should see the InfluxDB admin dashboard:

Now, create an InfluxDB Data Source in Chronograf (http://DNS_NAME:8888):

Create a new dashboard as follows:

You can create multiple graphs to visualize different types of metrics:

Note: For in-depth details on how to create interactive and dynamic dashboards in Chronograf, check my previous tutorial.

Collecting data is only half the story; to act on it with something like alerting, you need Kapacitor. So make sure to enable it:

Define a new alert to send a Slack notification if the CPU utilization is higher than 70%.

To test it out, we need to generate some load. For this, I used stress:
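On Ubuntu, for instance:

    sudo apt-get update && sudo apt-get install -y stress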

Stressing the CPU:
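For example, spinning up 4 CPU workers for five minutes (the values are arbitrary):

    stress --cpu 4 --timeout 300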

After a few seconds, you should receive a Slack notification.

Attach an IAM Role to an EC2 Instance with CloudFormation

CloudFormation allows you to manage your AWS infrastructure by defining it in code.

In this post, I will show you how to create an EC2 instance and attach an IAM role to it so that it can access your S3 buckets.

First, you'll need a template that specifies the resources you want in your stack. For this step, you'll use a sample template that I already prepared:
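The original template isn't included here, but a sketch of what it might look like follows; the AMI ID is a placeholder and the resource names are my own:

    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Parameters": {
        "KeyName": {
          "Type": "AWS::EC2::KeyPair::KeyName",
          "Description": "SSH key pair to use for the instance"
        },
        "InstanceType": {
          "Type": "String",
          "Default": "t2.micro"
        }
      },
      "Resources": {
        "S3ListRole": {
          "Type": "AWS::IAM::Role",
          "Properties": {
            "AssumeRolePolicyDocument": {
              "Version": "2012-10-17",
              "Statement": [{
                "Effect": "Allow",
                "Principal": {"Service": ["ec2.amazonaws.com"]},
                "Action": ["sts:AssumeRole"]
              }]
            },
            "Policies": [{
              "PolicyName": "S3ListPolicy",
              "PolicyDocument": {
                "Version": "2012-10-17",
                "Statement": [{
                  "Effect": "Allow",
                  "Action": ["s3:ListAllMyBuckets", "s3:ListBucket"],
                  "Resource": "*"
                }]
              }
            }]
          }
        },
        "InstanceProfile": {
          "Type": "AWS::IAM::InstanceProfile",
          "Properties": {"Roles": [{"Ref": "S3ListRole"}]}
        },
        "SSHSecurityGroup": {
          "Type": "AWS::EC2::SecurityGroup",
          "Properties": {
            "GroupDescription": "Allow SSH from anywhere",
            "SecurityGroupIngress": [{
              "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "CidrIp": "0.0.0.0/0"
            }]
          }
        },
        "EC2Instance": {
          "Type": "AWS::EC2::Instance",
          "Properties": {
            "ImageId": "ami-xxxxxxxx",
            "InstanceType": {"Ref": "InstanceType"},
            "KeyName": {"Ref": "KeyName"},
            "IamInstanceProfile": {"Ref": "InstanceProfile"},
            "SecurityGroups": [{"Ref": "SSHSecurityGroup"}]
          }
        }
      },
      "Outputs": {
        "SSHCommand": {
          "Description": "How to connect to the instance",
          "Value": {"Fn::Join": ["", ["ssh ec2-user@", {"Fn::GetAtt": ["EC2Instance", "PublicDnsName"]}]]}
        }
      }
    }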

The template creates a basic EC2 instance that uses an IAM role with an S3 list policy. It also creates a security group that allows SSH access from anywhere.

Note: I also used the Parameters section to declare values that can be passed to the template when you create the stack.

Now that we've defined the template, sign in to the AWS Management Console, navigate to CloudFormation, and click on "Create Stack". Upload the JSON file:

You will be asked to assign a name to the stack and to choose your EC2 instance configuration and SSH key pair:

Make sure to check the box "I acknowledge that AWS CloudFormation might create IAM resources" so that the IAM policy and role can be created:

Once launched, you will get the following screen with the stack creation events:

After a while, you will get the CREATE_COMPLETE message in the status tab:

Once done, on the Outputs tab, you should see how to connect to your instance via SSH:

If you run that command in your terminal, you should be able to connect to the server via SSH:

Let’s check if we can list the S3 buckets using the AWS CLI:
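That is:

    aws s3 ls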

Awesome! So we are able to list the buckets, but what if we want to create a new bucket:
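For example (the bucket name is arbitrary):

    # this call is denied, because the instance role only allows listing
    aws s3 mb s3://my-brand-new-bucket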

It didn't work, and that's expected: the IAM role attached to the instance doesn't have the required permission (the s3:CreateBucket action).

Continuous Deployment with AWS CodeDeploy & Github

This post will walk you through how to automatically deploy your application from GitHub using AWS CodeDeploy.

Let's start by creating the two IAM roles we will use in this tutorial:

  • IAM role for CodeDeploy to talk to EC2 instances.
  • IAM role for EC2 to access S3.

1 – CodeDeployRole

Go to the AWS IAM console, navigate to "Roles", choose "Create New Role", select "CodeDeploy", and attach the "AWSCodeDeployRole" policy:

2 – EC2S3Role

Create another IAM role, but this time choose EC2 as the trusted entity. Then, attach the "AmazonS3ReadOnlyAccess" policy:

Now that we've created the IAM roles, let's launch an EC2 instance, which CodeDeploy will use to deploy our application.

3 – EC2 Instance

Launch a new EC2 instance with the IAM role we created in the last section:

In the User Data field, enter the following script to install the AWS CodeDeploy agent at boot time:
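The script isn't shown above, but the standard agent install for Amazon Linux looks like this; the S3 bucket name assumes the us-east-1 region, so adjust it if needed:

    #!/bin/bash
    yum update -y
    yum install -y ruby wget
    cd /home/ec2-user
    # region-specific CodeDeploy agent bucket (us-east-1 assumed here)
    wget https://aws-codedeploy-us-east-1.s3.amazonaws.com/latest/install
    chmod +x ./install
    ./install auto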

Note: make sure to allow HTTP traffic in the security group.

Once created, connect to the instance via SSH using its public IP and verify that the CodeDeploy agent is running:
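On Amazon Linux:

    sudo service codedeploy-agent status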

4 – Application

Add the appspec.yml file to the application to describe to AWS CodeDeploy how to manage the lifecycle of your application:
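A sketch of what the appspec.yml might look like for a simple site served by Apache; the file paths and script names are assumptions:

    version: 0.0
    os: linux
    files:
      - source: /index.html
        destination: /var/www/html/
    hooks:
      BeforeInstall:
        - location: scripts/install_dependencies.sh
          timeout: 300
          runas: root
      AfterInstall:
        - location: scripts/restart_server.sh
          timeout: 300
          runas: root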

The BeforeInstall hook will install the Apache server:
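For example, scripts/install_dependencies.sh could be as simple as:

    #!/bin/bash
    yum install -y httpd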

The AfterInstall hook will restart the Apache server:
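And scripts/restart_server.sh:

    #!/bin/bash
    service httpd restart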

5 – Setup CodeDeploy

Go to AWS CodeDeploy and create a new application:

Select an in-place deployment (with downtime):

Click on "Skip", because we have already set up our EC2 instance:

The above will take you to the following page where you need to give a name to your application:

Select the EC2 instance and assign a name to the deployment group:

Select the CodeDeployRole we created in the first part of the tutorial:

Then click on “Deploy“:

Create a deployment, select Github as the data source:

Just select "Connect to GitHub". This will pop up a new browser window and take you to the GitHub login page, where you will have to enter your username and password.

After that, come back to this page; you should see something like the screen below. Just enter the remaining details and click "Deploy":

This will take you to a page as follows:

If you point your browser to the EC2 public IP, you should see:

Now, let’s automate the deployment using Github Integrations.

6 – Continuous Deployment

Go to the IAM dashboard and create a new policy that grants access to register application revisions and create new deployments from GitHub.
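A sketch of such a policy (in practice, scope the Resource down to your application's ARNs):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "codedeploy:GetDeploymentConfig",
            "codedeploy:RegisterApplicationRevision",
            "codedeploy:GetApplicationRevision",
            "codedeploy:CreateDeployment",
            "codedeploy:GetDeployment"
          ],
          "Resource": "*"
        }
      ]
    }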

Next, create a new user and attach the policy we created before:

Note: Copy the user's AWS access key ID and secret access key; they will come in handy later.

7 – Github Integration

Generate a new token to invoke CodeDeploy from Github:

Once the token is generated, copy it and keep it. Then, add the AWS CodeDeploy integration:

Fill the fields as below:

Finally, add the GitHub Auto-Deployment integration:

Fill the form as below:

To test it out, let’s edit a file or commit a new file. You should see a new deployment on AWS CodeDeploy: