Build a Serverless Production-Ready Blog

Are you tired of maintaining your CMS (WordPress, Drupal, etc.)? Paying expensive hosting fees? Fixing security issues every day?


Not long ago I discovered a new blogging framework called Hexo, which lets you publish Markdown documents as blog posts. So, as always, I got my hands dirty and wrote this post to show you how to build a production-ready blog with Hexo and use AWS S3 to make your blog serverless, paying only for what you use. Along the way, I will show you how to automate the deployment of new posts by setting up a CI/CD pipeline.

To get started, Hexo requires Node.js and Git. Once both are installed, issue the following command to install the Hexo CLI:
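A one-liner, assuming npm is on your PATH:

```bash
# Install the Hexo command line interface globally
npm install -g hexo-cli
```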

Next, create a new empty project:
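For example, assuming you want the blog to live in a folder named blog:

```bash
# Scaffold a new Hexo site and install its dependencies
hexo init blog
cd blog
npm install
```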

Modify blog global settings in _config.yml file:
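The typical fields to adjust look like this (the values shown are placeholders):

```yaml
# _config.yml (excerpt)
title: SlowCoder           # blog title shown in the header
author: John Doe           # placeholder author name
language: en
url: http://slowcoder.com  # the domain configured later in Route 53
```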

Start a local server with “hexo server“. By default, this is at http://localhost:4000. You’ll see Hexo’s pre-defined “Hello World” test post:

If you want to change the default theme, browse the Hexo themes gallery (https://hexo.io/themes/) and pick one you prefer.

I opted for the Magnetic theme as it includes many features:

  • Disqus and Facebook comments
  • Google Analytics
  • Cover image for posts and pages
  • Tags Support
  • Responsive Images
  • Image Gallery
  • Social Accounts configuration
  • Pagination

Clone the theme GitHub repository as below:
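Something along these lines; the repository URL is a placeholder, so use the actual Magnetic theme repository:

```bash
# Clone the theme into the blog's themes/ directory
git clone https://github.com/<author>/hexo-theme-magnetic.git themes/magnetic
```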

Then update your blog’s main _config.yml to set the theme to magnetic. Once done, restart the server:
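The relevant change and restart look like this:

```bash
# In _config.yml, set:
#   theme: magnetic
# Then restart the local preview server:
hexo server
```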

Now you are almost done with your blog setup. It is time to write your first article. To generate a new article file, use the following command:
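For example, with a placeholder title:

```bash
# Creates a new Markdown file under source/_posts/
hexo new post "My First Article"
```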

Now, sign in to the AWS Management Console, navigate to the S3 dashboard, and create an S3 bucket, or use the AWS CLI to create one:
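With the CLI, for example (the bucket name assumes the custom domain set up at the end of this post):

```bash
# Create a bucket named after the custom domain
aws s3 mb s3://slowcoder.com
```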

Add the following policy to the S3 bucket to make all objects public by default:
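A typical public-read bucket policy looks like this (the bucket name matches the one created above):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::slowcoder.com/*"
    }
  ]
}
```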

Next, enable static website hosting on the S3 bucket:
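With the AWS CLI, for example:

```bash
# Serve index.html by default and error.html on errors
aws s3 website s3://slowcoder.com/ --index-document index.html --error-document error.html
```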

To automate deployment of the blog to S3 each time a new article is published, we will set up a CI/CD pipeline using CircleCI.

Sign in to CircleCI using your GitHub account, then add the circle.yml file to your project:
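A minimal circle.yml sketch for this setup, using CircleCI 1.0 syntax (the Node version and bucket name are assumptions):

```yaml
machine:
  node:
    version: 8.11.1
dependencies:
  pre:
    - npm install -g hexo-cli
    - sudo pip install awscli
deployment:
  production:
    branch: master
    commands:
      # Generate the static site into public/ and push it to the bucket
      - hexo generate
      - aws s3 sync public/ s3://slowcoder.com --delete
```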

Note: Make sure to set the AWS Access Key ID and Secret Access Key in your project’s Settings page on CircleCI (the associated IAM user needs at least the s3:PutObject permission).

Now every time you push changes to your GitHub repo, CircleCI will automatically deploy the changes to S3. Here’s a passing build:

Finally, to make our blog user-friendly, we will set up a custom domain name in Route 53 as below:
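For the apex domain this means an A record aliased to the S3 website endpoint. A CLI sketch; your hosted zone ID is a placeholder, and the alias hosted zone ID shown is the documented value for S3 website endpoints in us-east-1:

```bash
aws route53 change-resource-record-sets \
  --hosted-zone-id <YOUR_HOSTED_ZONE_ID> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "slowcoder.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z3AQBSTGFYJSTF",
          "DNSName": "s3-website-us-east-1.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```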

Note: You can go further and set up a CloudFront distribution in front of the S3 bucket to optimize delivery of the blog’s assets.

You can now test your brand new blog by visiting the following address: http://slowcoder.com

Attach an IAM Role to an EC2 Instance with CloudFormation

CloudFormation allows you to manage your AWS infrastructure by defining it in code.

In this post, I will show you how to create an EC2 instance and attach an IAM role to it so you can access your S3 buckets.

First, you’ll need a template that specifies the resources that you want in your stack. For this step, you use a sample template that I already prepared:
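A sketch of what such a template might look like (the AMI ID is a placeholder; substitute one valid in your region):

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "EC2 instance with an IAM role allowing S3 listing",
  "Parameters": {
    "KeyName": { "Type": "AWS::EC2::KeyPair::KeyName", "Description": "SSH key pair" },
    "InstanceType": { "Type": "String", "Default": "t2.micro" }
  },
  "Resources": {
    "S3ListRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [{ "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" }]
        },
        "Policies": [{
          "PolicyName": "S3ListPolicy",
          "PolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [{ "Effect": "Allow", "Action": ["s3:ListAllMyBuckets", "s3:ListBucket"], "Resource": "*" }]
          }
        }]
      }
    },
    "S3ListInstanceProfile": {
      "Type": "AWS::IAM::InstanceProfile",
      "Properties": { "Roles": [{ "Ref": "S3ListRole" }] }
    },
    "SSHSecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "Allow SSH from anywhere",
        "SecurityGroupIngress": [{ "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "CidrIp": "0.0.0.0/0" }]
      }
    },
    "EC2Instance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": { "Ref": "InstanceType" },
        "KeyName": { "Ref": "KeyName" },
        "IamInstanceProfile": { "Ref": "S3ListInstanceProfile" },
        "SecurityGroups": [{ "Ref": "SSHSecurityGroup" }]
      }
    }
  },
  "Outputs": {
    "SSHCommand": {
      "Description": "How to connect to the instance",
      "Value": { "Fn::Join": ["", ["ssh ec2-user@", { "Fn::GetAtt": ["EC2Instance", "PublicDnsName"] }]] }
    }
  }
}
```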

The template creates a basic EC2 instance that uses an IAM role with an S3 list policy. It also creates a security group that allows SSH access from anywhere.

Note: I also used the Parameters section to declare values that can be passed to the template when you create the stack.

Now that the template is defined, sign in to the AWS Management Console, navigate to CloudFormation, and click on “Create Stack“. Upload the JSON file:

You will be asked to assign a name to the stack and to choose your EC2 instance configuration and SSH key pair:

Make sure to check the box “I acknowledge that AWS CloudFormation might create IAM resources“ so that the IAM policy and role can be created:

Once launched, you will see the following screen with the stack creation events:

After a while, you will get the CREATE_COMPLETE message in the status tab:

Once done, on the Outputs tab you should see how to connect to your instance via SSH:

If you run the command shown in the Outputs tab from your terminal, you should be able to connect to the server via SSH:

Let’s check if we can list the S3 buckets using the AWS CLI:
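For example:

```bash
# List all buckets the instance role is allowed to see
aws s3 ls
```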

Awesome! So we are able to list the buckets, but what if we want to create a new one:
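Let’s try it (the bucket name is arbitrary):

```bash
# This call is rejected with an AccessDenied error
aws s3 mb s3://my-new-test-bucket
```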

It didn’t work, and that’s expected: the IAM role attached to the instance doesn’t have the required permission (the s3:CreateBucket action).

Continuous Deployment with AWS CodeDeploy & GitHub

This post will walk you through how to automatically deploy your application from GitHub using AWS CodeDeploy.

Let’s start first by creating 2 IAM roles we will use in this tutorial:

  • IAM role for CodeDeploy to talk to EC2 instances.
  • IAM role for EC2 to access S3.

1 – CodeDeployRole

Go to the AWS IAM console, navigate to “Roles“, choose “Create New Role“, select “CodeDeploy“, and attach the “AWSCodeDeployRole“ policy:

2 – EC2S3Role

Create another IAM role, but this time choose EC2 as the trusted entity. Then, attach “AmazonS3ReadOnlyAccess” policy:

Now that we’ve created the IAM roles, let’s launch an EC2 instance that CodeDeploy will use to deploy our application.

3 – EC2 Instance

Launch a new EC2 instance with the IAM role we created in the last section:

Next, in the User Data field, type the following script to install the AWS CodeDeploy agent at boot time:
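A sketch for Amazon Linux in us-east-1; adjust the region in the installer URL if you launch elsewhere:

```bash
#!/bin/bash
yum update -y
yum install -y ruby wget
cd /home/ec2-user
# Download and run the CodeDeploy agent installer for us-east-1
wget https://aws-codedeploy-us-east-1.s3.amazonaws.com/latest/install
chmod +x ./install
./install auto
```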

Note: make sure to allow HTTP traffic in the security group.

Once created, connect to the instance using the Public IP via SSH, and verify whether the CodeDeploy agent is running:
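For example:

```bash
# Check that the agent service is up
sudo service codedeploy-agent status
```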

4 – Application

Add the appspec.yml file to the application to describe to AWS CodeDeploy how to manage the lifecycle of your application:
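A sketch of such an appspec.yml for a simple site served by Apache (the script paths are hypothetical and must match files in your repository):

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/restart_server.sh
      timeout: 300
      runas: root
```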

The BeforeInstall hook will install the Apache server:
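For example, scripts/install_dependencies.sh (the hypothetical name used in the sketch above) could be:

```bash
#!/bin/bash
# Install the Apache HTTP server
yum install -y httpd
```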

The AfterInstall hook will restart the Apache server:
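And scripts/restart_server.sh:

```bash
#!/bin/bash
# Restart Apache so it picks up the newly deployed files
service httpd restart
```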

5 – Setup CodeDeploy

Go to AWS CodeDeploy and create a new application:

Select In-Place deployment (with downtime):

Click on “Skip“, because we already set up our EC2 instance:

The above will take you to the following page where you need to give a name to your application:

Select the EC2 instance and assign a name to the deployment group:

Select the CodeDeployRole we created in the first part of the tutorial:

Then click on “Deploy“:

Create a deployment and select GitHub as the data source:

Just select “Connect to GitHub“. Doing that will pop up a new browser window and take you to the GitHub login page, where you will have to enter your username and password.

After that, come back to this page and you should see something like below. Just enter the remaining details and click “Deploy“:

This will take you to a page as follows:

If you point your browser to the EC2 public IP, you should see:

Now, let’s automate the deployment using Github Integrations.

6 – Continuous Deployment

Go to the IAM Dashboard and create a new policy that gives GitHub access to register application revisions and create new deployments.
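A sketch of such a policy; double-check the action list against the current AWS documentation for the GitHub integration:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codedeploy:GetDeploymentConfig",
        "codedeploy:RegisterApplicationRevision",
        "codedeploy:GetApplicationRevision",
        "codedeploy:CreateDeployment",
        "codedeploy:GetDeployment"
      ],
      "Resource": "*"
    }
  ]
}
```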

Next, create a new user and attach the policy we created before:

Note: copy the user’s AWS Access Key ID and Secret Access Key; they will come in handy later.

7 – Github Integration

Generate a new token to invoke CodeDeploy from Github:

Once the token is generated, copy it and keep it safe. Then, add the AWS CodeDeploy integration:

Fill the fields as below:

Finally, add the GitHub Auto-Deployment integration:

Fill the form as below:

To test it out, let’s edit a file or commit a new file. You should see a new deployment on AWS CodeDeploy:

YouTube to MP3 using S3, Lambda & Elastic Transcoder

In this tutorial, I will show you how to convert a YouTube video 📺 to an MP3 file 💿 using AWS Elastic Transcoder. How can we do that?

We will create a Lambda function to consume events published by S3. For any video uploaded to the bucket, S3 will invoke our Lambda function and pass in the event information. As the function executes, it reads the S3 event data, logs some of it to Amazon CloudWatch, and then kicks off a transcoding job.

Let’s start by creating an S3 bucket to store the input files (videos) and the output files (audio):
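For example, a single bucket that will hold both an inputs/ and an outputs/ prefix (the name is an assumption):

```bash
# One bucket holding both the uploaded videos and the transcoded audio
aws s3 mb s3://youtube-to-mp3-demo
```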

Next, let’s define a Transcoder pipeline. A pipeline essentially defines a queue for future transcoding jobs. To create a pipeline, we need to specify the input bucket (where the videos will be).
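The pipeline can also be created from the CLI; a sketch with placeholder names and role ARN:

```bash
aws elastictranscoder create-pipeline \
  --name youtube-to-mp3 \
  --input-bucket youtube-to-mp3-demo \
  --output-bucket youtube-to-mp3-demo \
  --role arn:aws:iam::<ACCOUNT_ID>:role/Elastic_Transcoder_Default_Role
```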

Note: Copy down the Pipeline ID; we will need it later on.

Having created a pipeline, go to the AWS Management Console, navigate to the Lambda service, click on “Create a Lambda Function“, and add S3 as the event source for the Lambda function:

I used the following Node.js code:
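A minimal sketch of such a function (the region, key layout, and environment variable name are assumptions):

```javascript
'use strict';

const AWS = require('aws-sdk');

// Elastic Transcoder client; the region is an assumption
const transcoder = new AWS.ElasticTranscoder({ region: 'us-east-1' });

exports.handler = (event, context, callback) => {
  // Extract the key of the uploaded video from the S3 event
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  console.log('New video uploaded:', key);

  // Build the output file name from the input file name (strip prefix and extension)
  const outputName = key.split('/').pop().replace(/\.[^.]+$/, '');

  const params = {
    PipelineId: process.env.PIPELINE_ID, // pipeline ID passed in as an environment variable
    Input: { Key: key },
    Outputs: [
      {
        Key: 'outputs/' + outputName + '.mp3',
        PresetId: '1351620000001-300040' // system preset for MP3 output
      }
    ]
  };

  // Kick off the transcoding job
  transcoder.createJob(params, (err, data) => {
    if (err) {
      console.error('Failed to create Elastic Transcoder job:', err);
      return callback(err);
    }
    console.log('Transcoding job started:', data.Job.Id);
    callback(null, data);
  });
};
```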

The script does the following:

  • Extract the filename of the uploaded file from the event object
  • Create a Transcoder job and specify the required outputs
  • Launch the job

Note: one thing you might notice in the function above is the use of a preset (1351620000001-300040). It describes how to encode the given file (in this case, as MP3). The full list of available presets can be found in the AWS documentation.

Finally, set the pipeline ID as an environment variable and select an IAM role with permission to access Elastic Transcoder:

Once created, upload a video file to the inputs bucket:
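For example, with the AWS CLI (file and bucket names are placeholders):

```bash
aws s3 cp funny-cat-video.mp4 s3://youtube-to-mp3-demo/inputs/funny-cat-video.mp4
```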

If everything went well, you should see the file in your outputs bucket:

S3 will trigger our Lambda function, which will execute and log the S3 object name to CloudWatch Logs:

After a couple of seconds (or minutes, depending on the size of the video), you should see a new MP3 file generated by the Elastic Transcoder job inside the outputs directory of the S3 bucket:

Create Front-End for Serverless RESTful API

In this post, we will build a UI for the Serverless REST API we built in the previous tutorial, so make sure to read that post before following this part.

Note: make sure to enable CORS for the endpoint. In the API Gateway console, select the resource, then choose Actions and Enable CORS:

The first step is to clone the project:

Head into the ui folder and modify js/app.js with your own API Gateway Invoke URL:
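A hypothetical example of the change (the variable name is made up; adapt it to whatever the project actually defines):

```javascript
// js/app.js -- point the UI at your own API Gateway stage
var API_INVOKE_URL = 'https://abc123xyz.execute-api.us-east-1.amazonaws.com/prod';
```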

Once done, you are ready to create a new S3 bucket:
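For example (the bucket name is a placeholder):

```bash
aws s3 mb s3://serverless-api-frontend
```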

Copy all the files in the ui directory into the bucket:
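One way to do it with the CLI:

```bash
# Upload every file under ui/ to the root of the bucket
aws s3 cp ui/ s3://serverless-api-frontend/ --recursive
```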

Finally, turn website hosting on for your bucket:
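For example:

```bash
aws s3 website s3://serverless-api-frontend/ --index-document index.html
```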

After running these commands, all of our static files should appear in the S3 bucket:

Your bucket is now configured for static website hosting, and you have an S3 website URL like this: http://<bucket_name>.s3-website-us-east-1.amazonaws.com