Deploy a Swarm Cluster with Alexa

Serverless and containers have changed the way we leverage public clouds and how we write, deploy, and maintain applications. A great way to combine the two paradigms is to build a voice assistant with Alexa, based on Lambda functions written in Go, that deploys a Docker Swarm cluster on AWS.

The figure below shows all components needed to deploy a production-ready Swarm cluster on AWS with Alexa.

Note: Full code is available on my GitHub.

A user will ask Amazon Echo to deploy a Swarm Cluster:

Echo will intercept the user’s voice command with built-in natural language understanding and speech recognition, and convey it to the Alexa service. A custom Alexa skill will convert the voice command to intents:

The Alexa skill will trigger a Lambda function for intent fulfilment:

The Lambda function will use the AWS EC2 API to deploy a fleet of EC2 instances from an AMI with Docker CE preinstalled (I used Packer to bake the AMI to reduce the cold start of the instances). It then pushes the cluster IP addresses to an SQS queue:
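A minimal sketch of this step, assuming the aws-sdk-go v1 SDK, a prebaked AMI ID, and an existing SQS queue (names and values are illustrative, not the exact code from the repository):

```go
package main

import (
	"log"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/sqs"
)

// deploySwarmFleet launches a fleet of instances from the prebaked AMI and
// pushes their private IP addresses to an SQS queue for later provisioning.
func deploySwarmFleet(amiID, queueURL string, size int64) error {
	sess := session.Must(session.NewSession())
	ec2Svc := ec2.New(sess)
	sqsSvc := sqs.New(sess)

	// Launch the fleet (Docker CE is already baked into the AMI).
	res, err := ec2Svc.RunInstances(&ec2.RunInstancesInput{
		ImageId:      aws.String(amiID),
		InstanceType: aws.String("t2.micro"),
		MinCount:     aws.Int64(size),
		MaxCount:     aws.Int64(size),
	})
	if err != nil {
		return err
	}

	// Collect the private IP addresses of the new instances.
	ips := []string{}
	for _, instance := range res.Instances {
		ips = append(ips, aws.StringValue(instance.PrivateIpAddress))
	}

	// Push the cluster IP addresses to SQS as a single message.
	_, err = sqsSvc.SendMessage(&sqs.SendMessageInput{
		QueueUrl:    aws.String(queueURL),
		MessageBody: aws.String(strings.Join(ips, ",")),
	})
	return err
}

func main() {
	// Illustrative values: AMI baked by Packer, queue created beforehand.
	if err := deploySwarmFleet("ami-0123456789abcdef0", "https://sqs.us-east-1.amazonaws.com/123456789012/swarm-clusters", 3); err != nil {
		log.Fatal(err)
	}
}
```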

Next, the function will insert a new item into a DynamoDB table with the current state of the cluster:
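A hedged sketch of that insert, with an illustrative table name and schema (the item starts in the Pending state and is flipped to Done later in the workflow):

```go
package main

import (
	"log"
	"strconv"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// saveClusterState records the new cluster and its status in DynamoDB.
// Table and attribute names are illustrative, not the exact schema from the repo.
func saveClusterState(tableName, clusterID string, size int) error {
	svc := dynamodb.New(session.Must(session.NewSession()))
	_, err := svc.PutItem(&dynamodb.PutItemInput{
		TableName: aws.String(tableName),
		Item: map[string]*dynamodb.AttributeValue{
			"ID":     {S: aws.String(clusterID)},
			"Size":   {N: aws.String(strconv.Itoa(size))},
			"Status": {S: aws.String("Pending")}, // later updated to "Done"
		},
	})
	return err
}

func main() {
	if err := saveClusterState("clusters", "swarm-1", 3); err != nil {
		log.Fatal(err)
	}
}
```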

Once SQS receives the message, a CloudWatch alarm (monitoring the ApproximateNumberOfMessagesVisible metric) will be triggered and, as a result, will publish a message to an SNS topic:
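The alarm itself could be created with a CLI command along these lines (queue name, region, threshold, and topic ARN are placeholders):

```bash
aws cloudwatch put-metric-alarm --alarm-name swarm-pending-clusters \
  --namespace AWS/SQS --metric-name ApproximateNumberOfMessagesVisible \
  --dimensions Name=QueueName,Value=swarm-clusters \
  --statistic Sum --period 300 --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:swarm-topic
```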

The SNS topic triggers a subscribed Lambda function:

The Lambda function will poll the queue for the new cluster and use the AWS Systems Manager API to provision a Swarm cluster on the fleet of EC2 instances created earlier:
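A minimal sketch of the provisioning call, using the Systems Manager Run Command API with the built-in AWS-RunShellScript document (the instance ID and commands are illustrative):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

// runOnInstances executes a shell command on EC2 instances through the
// Systems Manager Run Command API (the instances must run the SSM agent).
func runOnInstances(instanceIDs []*string, command string) (*ssm.SendCommandOutput, error) {
	svc := ssm.New(session.Must(session.NewSession()))
	return svc.SendCommand(&ssm.SendCommandInput{
		DocumentName: aws.String("AWS-RunShellScript"),
		InstanceIds:  instanceIDs,
		Parameters: map[string][]*string{
			"commands": {aws.String(command)},
		},
	})
}

func main() {
	// Illustrative manager instance ID; the worker join command would be built
	// from the token returned by "docker swarm join-token -q worker".
	manager := []*string{aws.String("i-0123456789abcdef0")}
	out, err := runOnInstances(manager, "docker swarm init")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.StringValue(out.Command.CommandId))
}
```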

For debugging, the function will output the Swarm Token to CloudWatch:

Finally, it will update the DynamoDB item state from Pending to Done and delete the message from SQS.

You can test your skill on your Amazon Echo, Echo Dot, or any Alexa device by saying, “Alexa, open Docker”.

At the end of the workflow described above, a Swarm cluster will be created:

At this point, you can check the status of the Swarm cluster by running the following command:
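For example, on the manager node:

```bash
docker node ls
```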

Improvements & Limitations:

  • Lambda execution may time out if the cluster size is large. You can work around this by using a master Lambda function that spawns child Lambda functions.
  • The CloudWatch and SNS parts could be removed if SQS were supported as a Lambda event source (AWS, please!). DynamoDB Streams or Kinesis Streams cannot be used to notify Lambda here, as I wanted to introduce a delay so the instances are fully created before the Swarm cluster is set up (maybe Simple Workflow Service?).
  • Inject SNS before SQS: SNS can add the message to SQS and trigger the Lambda function directly, so we wouldn’t need the CloudWatch alarm.
  • You can improve the skill by adding new custom intents to deploy Docker containers on the cluster, or to ask Alexa to deploy the cluster into a VPC.

In-depth details about the skill can be found on my GitHub. Make sure to drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Become AWS Certified Developer with Alexa

Being AWS Certified can boost your career (increasing your salary, finding a better job, or getting a promotion) and keep your expertise and skills relevant. There’s no better way to prepare for the AWS Certified Developer Associate exam than getting your hands dirty and building a serverless quiz game with an Alexa Skill and AWS Lambda.

Note: all the code is available in my GitHub.

1 – DynamoDB

To get started, create a DynamoDB Table using the AWS CLI:
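For example, assuming a table named Questions with a simple string partition key (adjust the schema to your needs):

```bash
aws dynamodb create-table --table-name Questions \
  --attribute-definitions AttributeName=ID,AttributeType=S \
  --key-schema AttributeName=ID,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
```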

I prepared in advance a list of questions for the following AWS services:

Next, import the JSON file to the DynamoDB table:

The insertToDynamoDB function uses the AWS DynamoDB SDK and PutItemRequest method to insert an item into the table:
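A hedged sketch of what such a loader might look like, assuming an illustrative questions.json file and item schema (the real code is on GitHub):

```go
package main

import (
	"encoding/json"
	"io/ioutil"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// Question holds one quiz entry; the fields are illustrative.
type Question struct {
	ID       string `json:"id"`
	Category string `json:"category"`
	Text     string `json:"question"`
	Answer   string `json:"answer"`
}

// insertToDynamoDB writes a single question to the table.
func insertToDynamoDB(tableName string, q Question) error {
	svc := dynamodb.New(session.Must(session.NewSession()))
	req, _ := svc.PutItemRequest(&dynamodb.PutItemInput{
		TableName: aws.String(tableName),
		Item: map[string]*dynamodb.AttributeValue{
			"ID":       {S: aws.String(q.ID)},
			"Category": {S: aws.String(q.Category)},
			"Question": {S: aws.String(q.Text)},
			"Answer":   {S: aws.String(q.Answer)},
		},
	})
	return req.Send()
}

func main() {
	raw, err := ioutil.ReadFile("questions.json") // illustrative file name
	if err != nil {
		log.Fatal(err)
	}
	var questions []Question
	if err := json.Unmarshal(raw, &questions); err != nil {
		log.Fatal(err)
	}
	for _, q := range questions {
		if err := insertToDynamoDB("Questions", q); err != nil {
			log.Fatal(err)
		}
	}
}
```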

Run the main.go file by issuing the following command:
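From the project directory, that is simply:

```bash
go run main.go
```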

If you navigate to DynamoDB Dashboard, you should see that the list of questions has been successfully inserted:

2 – Alexa Skill

This is what ties it all together: it links the phrases the user says while interacting with the quiz to the corresponding intents.

For people who are not familiar with NLP: Alexa is built on an NLP engine, a system that analyzes phrases (user messages) and returns an intent. An intent describes what the user wants or wants to do; it is the intention behind the message. Alexa learns new intents by being given example phrases for each intent. Behind the scenes, the engine can then predict the intent of a phrase even if it has never seen that exact phrase before.

So, sign in to the Amazon Developer Console and create a new custom Alexa Skill. Set an invocation name as follows:

Create a new Intent for starting the Quiz:

Add a new slot type to store the user’s choice:

Then, create another intent for AWS service choice:

And another one for the user’s answer choice:

Save your interaction model. Then, you’re ready to configure your Alexa Skill.

3 – Lambda Function

The Lambda handler function is self-explanatory: it maps each intent to a code snippet. To keep track of the user’s score, we use the Alexa sessionAttributes property of the JSON response; the session attributes are then passed back with the next request inside the session object. The list of questions is retrieved from DynamoDB using the AWS SDK, and SSML (Speech Synthesis Markup Language) is used to make Alexa speak a sentence ending in a question mark as a question, or to add pauses in the speech:
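The exact handler is on GitHub; the following is a heavily trimmed sketch of the idea, with hand-rolled request/response structs and illustrative intent names (the real skill defines more intents and carries the score across questions via sessionAttributes):

```go
package main

import (
	"context"

	"github.com/aws/aws-lambda-go/lambda"
)

// Minimal Alexa request/response shapes; only the fields used here.
type AlexaRequest struct {
	Session struct {
		Attributes map[string]interface{} `json:"attributes"`
	} `json:"session"`
	Request struct {
		Type   string `json:"type"`
		Intent struct {
			Name string `json:"name"`
		} `json:"intent"`
	} `json:"request"`
}

type AlexaResponse struct {
	Version           string                 `json:"version"`
	SessionAttributes map[string]interface{} `json:"sessionAttributes"`
	Response          struct {
		OutputSpeech struct {
			Type string `json:"type"`
			SSML string `json:"ssml"`
		} `json:"outputSpeech"`
		ShouldEndSession bool `json:"shouldEndSession"`
	} `json:"response"`
}

func handler(ctx context.Context, req AlexaRequest) (AlexaResponse, error) {
	resp := AlexaResponse{Version: "1.0", SessionAttributes: req.Session.Attributes}
	if resp.SessionAttributes == nil {
		// Start tracking the score at the beginning of the session.
		resp.SessionAttributes = map[string]interface{}{"score": 0}
	}

	resp.Response.OutputSpeech.Type = "SSML"
	switch req.Request.Intent.Name {
	case "StartQuizIntent": // illustrative intent name
		// The question mark and <break> tag shape Alexa's intonation and pacing.
		resp.Response.OutputSpeech.SSML = "<speak>Welcome! <break time=\"500ms\"/> Which AWS service would you like to be tested on?</speak>"
	default:
		resp.Response.OutputSpeech.SSML = "<speak>Sorry, I didn't get that. Could you repeat?</speak>"
	}
	return resp, nil
}

func main() {
	lambda.Start(handler)
}
```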

Note: Full code is available in my GitHub.

Sign in to AWS Management Console, and create a new Lambda function in Go from scratch:

Assign the following IAM role to the function:

Generate the deployment package, upload it to the Lambda Console, and set the TABLE_NAME environment variable to the table name:

4 – Testing

Now that you have created the function and put its code in place, it’s time to specify how it gets called. We’ll do this by linking the Lambda ARN to Alexa Skill:

Once the information is in place, click Save Endpoints. You’re ready to start testing your new Alexa Skill!

To test, log in to the Alexa Developer Console and enable the “Test” switch on your skill from the “Test” tab:

Or use an Alexa-enabled device like Amazon Echo by saying, “Alexa, open AWS Developer Quiz”:

Immutable AMI with Packer

When dealing with hybrid or multi-cloud environments, you need identical machine images for multiple platforms built from a single source configuration. That’s where Packer comes into play.

To get started, find the appropriate package for your system and download Packer:

With Packer installed, let’s dive right in and bake an AMI with Docker Engine preinstalled, in order to build a Swarm or Kubernetes cluster and avoid the cold start of node machines.

Packer is template-driven; templates are written in JSON format:
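A minimal template along those lines might look like the following (the source AMI ID, AMI name, and script path are placeholders, not the exact template from the repository):

```json
{
  "variables": {
    "region": "us-east-1"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "{{user `region`}}",
      "source_ami": "ami-0123456789abcdef0",
      "instance_type": "t2.micro",
      "ssh_username": "ec2-user",
      "ami_name": "docker-ce-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "./setup.sh"
    }
  ]
}
```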

The template is divided into 3 sections:

  • variables: Custom variables that can be overridden at runtime by using the -var flag. In the above snippet, we’re specifying the AWS region.
  • builders: You can specify multiple builders depending on the target platforms (EC2, VMware, Google Cloud, Docker …).
  • provisioners: You can pass a shell script or use configuration management tools like Ansible, Chef, Puppet, or Salt to provision the AMI and install all the required packages and software.

Packer will use an existing Amazon Linux image (a “gold image”) from the marketplace and install the latest Docker Community Edition using the following Bash script:
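A minimal sketch of such a provisioning script, assuming an Amazon Linux base image and installing Docker from its package repositories (the exact script is on GitHub):

```bash
#!/bin/bash
# Provisioning script executed by Packer on the temporary build instance.
sudo yum update -y
sudo yum install -y docker
# Allow the default ec2-user to run Docker without sudo.
sudo usermod -aG docker ec2-user
# Start Docker on boot.
sudo chkconfig docker on
```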

Note: You can avoid hardcoding the Gold Image ID in the template by using the source_ami_filter attribute.

Before we take the template and build an image from it, let’s validate the template by running:
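Assuming the template is saved as template.json:

```bash
packer validate template.json
```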

Now that we have our template file and bash provisioning script ready to go, we can issue the following command to build our new AMI:
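That is:

```bash
packer build template.json
```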

This will chew for a bit and finally output the AMI ID:

Next, create a new EC2 instance based on the AMI:

Then, connect to your instance via SSH and type the following command to verify that the latest Docker release is installed:
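For example:

```bash
docker version
```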

Simple, right? Well, you can go further and set up a CI/CD pipeline to build your AMIs on every push, recreate your EC2 instances with the new AMIs, and roll back in case of failure.

Slack Notification with CloudWatch Alarms & Lambda

ChatOps has emerged as one of the most effective techniques to implement DevOps. Hence, it is great to receive notifications and infrastructure alerts in collaboration messaging platforms like Slack and HipChat.

AWS CloudWatch alarms and SNS are a great mix for building a real-time notification system, as SNS supports multiple endpoints (Email, HTTP, Lambda, SQS). Unfortunately, SNS doesn’t support sending notifications to tools like Slack out of the box.

CloudWatch will trigger an alarm that sends a message to an SNS topic if the monitored metric goes out of range. A Lambda function will be invoked in response to SNS receiving the message and will call the Slack API to post a message to a Slack channel.

To get started, create an EC2 instance using the AWS Management Console or the AWS CLI:

Next, create an SNS topic:
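For example, with the AWS CLI (the topic name is illustrative):

```bash
aws sns create-topic --name alarm-notifications
```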

Then, set up a CloudWatch alarm that fires when the instance CPU utilization is higher than 40% and sends a notification to the SNS topic:
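A CLI version of that alarm might look like this (instance ID and topic ARN are placeholders):

```bash
aws cloudwatch put-metric-alarm --alarm-name high-cpu \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average --period 300 --threshold 40 \
  --comparison-operator GreaterThanThreshold --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:alarm-notifications
```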

As a result:

To be able to post messages to a Slack channel, we need to create a Slack Incoming WebHook. Start by setting up an incoming webhook integration in your Slack workspace:

Note down the returned WebHook URL for the upcoming part.

The Lambda handler function is written in Go. It takes the SNS message as an argument, parses it, and calls the Slack API to post a message to the Slack channel configured in the previous section:
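A minimal sketch of such a handler, assuming the WebHook URL is passed through an environment variable named SLACK_WEBHOOK_URL (an assumption, not necessarily the variable name used in the repository):

```go
package main

import (
	"bytes"
	"encoding/json"
	"net/http"
	"os"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handler is invoked by SNS; it forwards the alarm message to Slack
// through the incoming WebHook URL.
func handler(event events.SNSEvent) error {
	message := event.Records[0].SNS.Message

	// Slack incoming webhooks accept a simple JSON payload with a "text" field.
	payload, err := json.Marshal(map[string]string{"text": message})
	if err != nil {
		return err
	}

	_, err = http.Post(os.Getenv("SLACK_WEBHOOK_URL"), "application/json", bytes.NewBuffer(payload))
	return err
}

func main() {
	lambda.Start(handler)
}
```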

As Go is a compiled language, build the application and create a Lambda deployment package using the bash script below:
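A typical script for this would be along these lines:

```bash
#!/bin/bash
# Build a static Linux binary and package it for Lambda.
GOOS=linux GOARCH=amd64 go build -o main main.go
zip deployment.zip main
```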

Once created, use the AWS CLI to deploy the function to Lambda. Make sure to override the Slack WebHook with your own:
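For example (function name, role ARN, and WebHook URL are placeholders):

```bash
aws lambda create-function --function-name SlackNotifier \
  --runtime go1.x --handler main \
  --zip-file fileb://deployment.zip \
  --role arn:aws:iam::123456789012:role/SlackNotifierRole \
  --environment "Variables={SLACK_WEBHOOK_URL=https://hooks.slack.com/services/...}"
```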

Note: For non-Gophers, you can download the zip file directly from here.

From here, configure the invoking service for your function to be the SNS topic we created earlier:

Lambda Dashboard:

Let’s test it out. Connect to your instance via SSH, then install stress, a tool for workload testing:
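On Amazon Linux, for instance (stress ships in the EPEL repository, which may need to be enabled first):

```bash
sudo yum install -y stress
```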

Issue the following command to generate some workloads on the CPU:
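For example, to run two CPU workers for five minutes:

```bash
stress --cpu 2 --timeout 300
```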

You should receive a Slack notification as below:

Note: You can go further and customize your Slack message.

Build a Serverless Production-Ready Blog

Are you tired of maintaining your CMS (WordPress, Drupal, etc.)? Of paying expensive hosting fees? Of fixing security issues every day?

Not long ago, I discovered a blogging framework called Hexo that lets you publish Markdown documents in the form of blog posts. So, as always, I got my hands dirty and wrote this post to show you how to build a production-ready blog with Hexo and use AWS S3 to make your blog serverless and pay only per usage. Along the way, I will show you how to automate the deployment of new posts by setting up a CI/CD pipeline.

To get started, Hexo requires Node.js and Git to be installed. Once both requirements are in place, issue the following command to install the Hexo CLI:
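That is:

```bash
npm install -g hexo-cli
```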

Next, create a new empty project:
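For example (the folder name is up to you):

```bash
hexo init blog
cd blog
npm install
```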

Modify blog global settings in _config.yml file:

Start a local server with "hexo server". By default, this is at http://localhost:4000. You’ll see Hexo’s pre-defined “Hello World” test post:

If you want to change the default theme, you just need to go here and find a new one you prefer.

I opted for the Magnetic theme, as it includes many features:

  • Disqus and Facebook comments
  • Google Analytics
  • Cover image for posts and pages
  • Tags Support
  • Responsive Images
  • Image Gallery
  • Social Accounts configuration
  • Pagination

Clone the theme GitHub repository as below:

Then update your blog’s main _config.yml to set the theme to magnetic. Once done, restart the server:

Now you are almost done with your blog setup. It is time to write your first article. To generate a new article file, use the following command:
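For example:

```bash
hexo new "My First Post"
```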

Now, sign in to AWS Management Console, navigate to S3 Dashboard and create an S3 Bucket or use the AWS CLI to create a new one:
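With the CLI, for example (for website hosting behind a custom domain, the bucket name should match the domain you plan to serve the blog from):

```bash
aws s3 mb s3://slowcoder.com
```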

Add the following policy to the S3 bucket to make all objects public by default:
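A standard public-read policy looks like this (replace the bucket name if yours differs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::slowcoder.com/*"
    }
  ]
}
```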

Next, enable static website hosting on the S3 bucket:
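For example:

```bash
aws s3 website s3://slowcoder.com/ --index-document index.html --error-document error.html
```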

To automate the deployment of the blog to S3 each time a new article is published, we will set up a CI/CD pipeline using CircleCI.

Sign in to CircleCI using your GitHub account, then add the circle.yml file to your project:
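A sketch of a CircleCI 1.0 circle.yml for this pipeline might look like the following (the Node version, bucket name, and branch are assumptions; the actual file may differ):

```yaml
machine:
  node:
    version: 8.9.0
dependencies:
  pre:
    - npm install -g hexo-cli
test:
  override:
    - hexo generate
deployment:
  production:
    branch: master
    commands:
      - aws s3 sync public/ s3://slowcoder.com --delete
```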

Note: Make sure to set the AWS Access Key ID and Secret Access Key in your Project’s Settings page on CircleCI (s3:PutObject permission).

Now every time you push changes to your GitHub repo, CircleCI will automatically deploy the changes to S3. Here’s a passing build:

Finally, to make our blog user-friendly, we will setup a custom domain name in Route53 as below:

Note: You can go further and setup a CloudFront Distribution in front of the S3 bucket to optimize delivery of blog assets.

You can now test your brand new blog by typing the following address: http://slowcoder.com