Deploy a Swarm Cluster with Alexa

Serverless and containers have changed the way we leverage public clouds and how we write, deploy, and maintain applications. A great way to combine the two paradigms is to build a voice assistant with Alexa, based on Lambda functions written in Go, that deploys a Docker Swarm cluster on AWS.

The figure below shows all components needed to deploy a production-ready Swarm cluster on AWS with Alexa.

Note: Full code is available on my GitHub.

A user will ask Amazon Echo to deploy a Swarm Cluster:

Echo will capture the user’s voice command and convey it to the Alexa service, which applies built-in speech recognition and natural language understanding. A custom Alexa skill will then convert the voice command to an intent:

The Alexa skill will trigger a Lambda function for intent fulfilment:

The Lambda Function will use the AWS EC2 API to deploy a fleet of EC2 instances from an AMI with Docker CE preinstalled (I used Packer to bake the AMI to reduce the instances’ cold-start time). It will then push the cluster IP addresses to an SQS queue:
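A minimal sketch of that step in Go with the AWS SDK; the AMI ID, queue URL, and instance count below are placeholders rather than the skill’s real configuration:

```go
// Sketch: launch the fleet from the pre-baked Docker CE AMI and queue its IPs.
package main

import (
	"encoding/json"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/sqs"
)

func deployCluster(nodes int64) error {
	sess := session.Must(session.NewSession())

	// Launch the EC2 fleet from the AMI baked with Packer (placeholder AMI ID).
	reservation, err := ec2.New(sess).RunInstances(&ec2.RunInstancesInput{
		ImageId:      aws.String("ami-xxxxxxxx"),
		InstanceType: aws.String("t2.micro"),
		MinCount:     aws.Int64(nodes),
		MaxCount:     aws.Int64(nodes),
	})
	if err != nil {
		return err
	}

	// Collect the instances' private IP addresses.
	ips := []string{}
	for _, instance := range reservation.Instances {
		ips = append(ips, aws.StringValue(instance.PrivateIpAddress))
	}

	// Push the cluster IP addresses to SQS (placeholder queue URL).
	body, _ := json.Marshal(ips)
	_, err = sqs.New(sess).SendMessage(&sqs.SendMessageInput{
		QueueUrl:    aws.String("https://sqs.us-east-1.amazonaws.com/123456789012/swarm-queue"),
		MessageBody: aws.String(string(body)),
	})
	return err
}

func main() {
	if err := deployCluster(3); err != nil {
		log.Fatal(err)
	}
}
```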

Next, the function will insert a new item into a DynamoDB table with the current state of the cluster:
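Again as a sketch, with a hypothetical table name and item layout:

```go
// Sketch: record the new cluster in DynamoDB with a Pending state.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func saveClusterState(clusterID, status string) error {
	svc := dynamodb.New(session.Must(session.NewSession()))
	_, err := svc.PutItem(&dynamodb.PutItemInput{
		TableName: aws.String("clusters"), // placeholder table name
		Item: map[string]*dynamodb.AttributeValue{
			"ClusterID": {S: aws.String(clusterID)},
			"Status":    {S: aws.String(status)},
		},
	})
	return err
}

func main() {
	if err := saveClusterState("demo", "Pending"); err != nil {
		log.Fatal(err)
	}
}
```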

Once the message lands in SQS, a CloudWatch alarm that monitors the ApproximateNumberOfMessagesVisible metric will be triggered and, as a result, will publish a message to an SNS topic:
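For reference, an alarm of that shape can be created with the AWS CLI; the queue name, topic ARN, and threshold below are placeholders:

```bash
aws cloudwatch put-metric-alarm \
    --alarm-name swarm-queue-not-empty \
    --namespace AWS/SQS \
    --metric-name ApproximateNumberOfMessagesVisible \
    --dimensions Name=QueueName,Value=swarm-queue \
    --statistic Sum --period 60 --evaluation-periods 1 \
    --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:swarm-topic
```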

The SNS topic triggers a subscribed Lambda function:

The Lambda function will poll the queue for the new cluster and use the AWS Systems Manager API to provision a Swarm cluster on the fleet of EC2 instances created earlier:
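A rough sketch of that step, initializing the Swarm manager through SSM Run Command with the AWS-RunShellScript document; the instance ID and address are placeholders, and workers would be joined the same way with the token:

```go
// Sketch: bootstrap Docker Swarm on the manager node via SSM Run Command.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func initSwarmManager(instanceID, managerIP string) error {
	svc := ssm.New(session.Must(session.NewSession()))
	_, err := svc.SendCommand(&ssm.SendCommandInput{
		DocumentName: aws.String("AWS-RunShellScript"),
		InstanceIds:  []*string{aws.String(instanceID)},
		Parameters: map[string][]*string{
			"commands": {aws.String("docker swarm init --advertise-addr " + managerIP)},
		},
	})
	return err
}

func main() {
	// Placeholder instance ID and private IP address.
	if err := initSwarmManager("i-0123456789abcdef0", "10.0.1.10"); err != nil {
		log.Fatal(err)
	}
}
```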

For debugging, the function will output the Swarm join token to CloudWatch Logs:

Finally, it will update the DynamoDB item state from Pending to Done and delete the message from SQS.

You can test your skill on your Amazon Echo, Echo Dot, or any Alexa-enabled device by saying, “Alexa, open Docker …”.

At the end of the workflow described above, a Swarm cluster will be created:

At this point, you can check the Swarm status by running the following command:
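For reference, the usual check is run on the manager node:

```bash
docker node ls
```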

Improvements & Limitations:

  • Lambda execution may time out if the cluster is large. You can work around this with a master Lambda function that spawns child Lambda functions.
  • CloudWatch & SNS parts can be deleted if SQS is supported as Lambda event source (AWS PLEAAASE !). DynamoDB streams or Kinesis streams cannot be used to notify Lambda as I wanted to create some kind of delay for the instances to be fully created before setting up the Swarm cluster. (maybe Simple Workflow Service ?)
  • Put SNS in front of SQS: SNS can add the message to the queue and trigger the Lambda function directly, so the CloudWatch alarm would no longer be needed.
  • You can improve the skill by adding new custom intents to deploy Docker containers on the cluster, or by asking Alexa to deploy the cluster in a VPC.

In-depth details about the skill can be found on my GitHub. Make sure to drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Become AWS Certified Developer with Alexa

Being AWS Certified can boost your career (a higher salary, a better job, or a promotion) and keep your expertise and skills relevant. There’s no better way to prepare for your AWS Certified Developer Associate exam than getting your hands dirty and building a serverless quiz game with an Alexa Skill and AWS Lambda.

Note: all the code is available in my GitHub.

1 – DynamoDB

To get started, create a DynamoDB Table using the AWS CLI:
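A minimal definition would look like the following; the table name, key schema, and throughput are placeholders, and the original repository defines its own:

```bash
aws dynamodb create-table \
    --table-name questions \
    --attribute-definitions AttributeName=ID,AttributeType=S \
    --key-schema AttributeName=ID,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
```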

I prepared in advance a list of questions for the following AWS services:

Next, import the JSON file to the DynamoDB table:

The insertToDynamoDB function uses the AWS DynamoDB SDK and PutItemRequest method to insert an item into the table:
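A compact version of what such an importer might look like, assuming a questions.json file and a simple question schema (the exact fields in the original repository may differ):

```go
// main.go - sketch: load questions from a JSON file and insert them into DynamoDB.
package main

import (
	"encoding/json"
	"io/ioutil"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
)

// Question is an assumed schema for illustration.
type Question struct {
	ID       string   `json:"id"`
	Text     string   `json:"text"`
	Choices  []string `json:"choices"`
	Answer   string   `json:"answer"`
	Category string   `json:"category"`
}

func insertToDynamoDB(question Question) error {
	svc := dynamodb.New(session.Must(session.NewSession()))
	item, err := dynamodbattribute.MarshalMap(question)
	if err != nil {
		return err
	}
	// PutItemRequest returns a request object that is sent explicitly.
	req, _ := svc.PutItemRequest(&dynamodb.PutItemInput{
		TableName: aws.String("questions"), // placeholder table name
		Item:      item,
	})
	return req.Send()
}

func main() {
	raw, err := ioutil.ReadFile("questions.json")
	if err != nil {
		log.Fatal(err)
	}
	var questions []Question
	if err := json.Unmarshal(raw, &questions); err != nil {
		log.Fatal(err)
	}
	for _, q := range questions {
		if err := insertToDynamoDB(q); err != nil {
			log.Fatal(err)
		}
	}
}
```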

Run the main.go file by issuing the following command:
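```bash
# assumes questions.json sits next to main.go, as in the sketch above
go run main.go
```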

If you navigate to DynamoDB Dashboard, you should see that the list of questions has been successfully inserted:

2 – Alexa Skill

This is what ties everything together: it links the phrases users say when interacting with the quiz to the intents.

For people who are not familiar with NLP: Alexa is based on an NLP engine, a system that analyzes phrases (user messages) and returns an intent. An intent describes what the user wants to do; it is the intention behind the message. Alexa learns new intents from example phrases attributed to each intent. Behind the scenes, the engine can then predict the intent even for phrases it has never seen before.

So, sign up to Amazon Developer Console, and create a new custom Alexa Skill. Set an invocation name as follows:

Create a new Intent for starting the Quiz:

Add a new slot type to store the user’s choice:

Then, create another intent for AWS service choice:

And another for the user’s answer choice:

Save your interaction model. Then, you’re ready to configure your Alexa Skill.

3 – Lambda Function

The Lambda handler function is self-explanatory: it maps each intent to a code snippet. To keep track of the user’s score, we use the sessionAttributes property of the Alexa JSON response; the session attributes are then passed back to us inside the session object of the next request. The list of questions is retrieved from DynamoDB using the AWS SDK, and SSML (Speech Synthesis Markup Language) is used to make Alexa speak a sentence ending in a question mark as a question, or to add pauses to the speech:
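A trimmed-down sketch of that mapping, with hand-rolled request and response types; the intent names, slot names, and prompts are illustrative, the DynamoDB lookup is omitted, and the real skill’s code differs in detail:

```go
// Sketch: Alexa skill handler mapping intents to responses, tracking the score
// in sessionAttributes and answering with SSML.
package main

import (
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

type Slot struct {
	Value string `json:"value"`
}

type AlexaRequest struct {
	Session struct {
		Attributes map[string]interface{} `json:"attributes"`
	} `json:"session"`
	Request struct {
		Type   string `json:"type"`
		Intent struct {
			Name  string          `json:"name"`
			Slots map[string]Slot `json:"slots"`
		} `json:"intent"`
	} `json:"request"`
}

type AlexaResponse struct {
	Version           string                 `json:"version"`
	SessionAttributes map[string]interface{} `json:"sessionAttributes"`
	Response          struct {
		OutputSpeech struct {
			Type string `json:"type"`
			SSML string `json:"ssml"`
		} `json:"outputSpeech"`
		ShouldEndSession bool `json:"shouldEndSession"`
	} `json:"response"`
}

func say(ssml string, attributes map[string]interface{}) AlexaResponse {
	var res AlexaResponse
	res.Version = "1.0"
	res.SessionAttributes = attributes // passed back to us with the next request
	res.Response.OutputSpeech.Type = "SSML"
	res.Response.OutputSpeech.SSML = ssml
	return res
}

func HandleRequest(req AlexaRequest) (AlexaResponse, error) {
	attributes := req.Session.Attributes
	if attributes == nil {
		attributes = map[string]interface{}{"score": 0.0}
	}

	switch req.Request.Intent.Name {
	case "StartQuizIntent":
		// The pause and the question mark make Alexa phrase this as a question.
		return say(`<speak>Welcome to the quiz! <break time="1s"/> Which AWS service do you want to be quizzed on?</speak>`, attributes), nil
	case "AnswerIntent":
		// Compare the spoken answer with the expected one kept in the session.
		score, _ := attributes["score"].(float64)
		if req.Request.Intent.Slots["answer"].Value == fmt.Sprintf("%v", attributes["expected"]) {
			score++
		}
		attributes["score"] = score
		return say(fmt.Sprintf("<speak>Your score is %v. <break time=\"500ms\"/> Next question?</speak>", score), attributes), nil
	default:
		return say("<speak>Sorry, I didn't get that.</speak>", attributes), nil
	}
}

func main() {
	lambda.Start(HandleRequest)
}
```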

Note: Full code is available in my GitHub.

Sign in to AWS Management Console, and create a new Lambda function in Go from scratch:

Assign the following IAM role to the function:

Generate the deployment package, and upload it to the Lambda Console and set the TABLE_NAME environment variable to the table name:
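For a Go function, the deployment package is just a zipped Linux binary, for example:

```bash
GOOS=linux GOARCH=amd64 go build -o main main.go
zip deployment.zip main
```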

4 – Testing

Now that you have created the function and put its code in place, it’s time to specify how it gets called. We’ll do this by linking the Lambda function’s ARN to the Alexa Skill:

Once the information is in place, click Save Endpoints. You’re ready to start testing your new Alexa Skill!

To test, you need to log in to the Alexa Developer Console and enable the “Test” switch on your skill from the “Test” tab:

Or use an Alexa-enabled device like Amazon Echo by saying, “Alexa, open AWS Developer Quiz”:

Komiser: AWS Environment Inspector

To build highly available and resilient applications in AWS, you need to assume that everything will fail, so you design and deploy your application across multiple Availability Zones and regions. As a result, you end up with many unused AWS resources (snapshots, ELBs, EC2 instances, Elastic IPs, etc.) that can cost you a fortune.

One pillar of the AWS Well-Architected Framework is cost optimization, which is why you need a global overview of your AWS infrastructure. Fortunately, AWS offers many fully managed services, such as CloudWatch, CloudTrail, Trusted Advisor, and AWS Config, to help you achieve that. However, they require a deep understanding of the AWS platform and are not straightforward to use.

That’s why I came up with Komiser, a tool that simplifies the process by querying the AWS API to fetch information about almost all of the critical AWS services (EC2, RDS, ELB, S3, Lambda, and more) in real time, in a single dashboard.

Note: To avoid exceeding the AWS API rate limit, responses are kept in an in-memory cache for 30 minutes by default.

Komiser supported AWS Services:

  • Compute:
    • Running/Stopped/Terminated EC2 instances
    • Current EC2 instances per region
    • EC2 instances per family type
    • Lambda Functions per runtime environment
    • Disassociated Elastic IP addresses
    • Total number of Key Pairs
    • Total number of Auto Scaling Groups
  • Network & Content Delivery:
    • Total number of VPCs
    • Total number of Network Access Control Lists
    • Total number of Security Groups
    • Total number of Route Tables
    • Total number of Internet Gateways
    • Total number of NAT Gateways
    • Elastic Load Balancers per family type (ELB, ALB, NLB)
  • Management Tools:
    • CloudWatch Alarms State
    • Billing Report (Up to 6 months)
  • Database:
    • DynamoDB Tables
    • DynamoDB Provisioned Throughput
    • RDS DB instances
  • Messaging:
    • SQS Queues
    • SNS Topics
  • Storage:
    • S3 Buckets
    • EBS Volumes
    • EBS Snapshots
  • Security, Identity & Compliance:
    • IAM Roles
    • IAM Policies
    • IAM Groups
    • IAM Users

1 – Configuring Credentials

Komiser needs your AWS credentials to authenticate with AWS services. The CLI supports multiple methods of providing these credentials; by default, it sources them automatically from the default credential provider chain. The common items in the chain are the following:

  • Environment Credentials
    • AWS_ACCESS_KEY_ID
    • AWS_SECRET_ACCESS_KEY
    • AWS_DEFAULT_REGION
  • Shared Credentials file (~/.aws/credentials)
  • EC2 Instance Role Credentials

To get started, create a new IAM user and assign it the following IAM policy:

Next, generate a new AWS access key and secret key, then update the ~/.aws/credentials file as shown below:
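With placeholder keys (these are the sample values from the AWS documentation), the file looks like this:

```ini
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```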

2 – Installation

2.1 – CLI

Find the appropriate package for your system and download it. For Linux:

Note: The Komiser CLI is updated frequently with support for new AWS services. To see if you have the latest version, see the project GitHub repository.

After you install the Komiser CLI, you may need to add the path to the executable file to your PATH variable.

2.2 – Docker Image

Use the official Komiser Docker Image:
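Something along these lines should do, assuming the image is published as mlabouardy/komiser and mounting your local credentials into the container:

```bash
docker run -d -p 3000:3000 \
    -v ~/.aws/credentials:/root/.aws/credentials:ro \
    --name komiser mlabouardy/komiser
```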

3 – Overview

Once installed, start the Komiser server:
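The exact flags may vary by version (check komiser --help), but starting the server looks roughly like:

```bash
komiser start --port 3000
```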

If you point your favorite browser to http://localhost:3000, you should see the Komiser dashboard:

Hope it helps! The CLI is still in its early stages, so you are welcome to contribute to the project on GitHub.

Serverless Golang API with AWS Lambda

A few days ago, AWS announced Go as a supported language for AWS Lambda. So I got my hands dirty and built a serverless Golang Lambda function to discover new movies by genre. I went even further and created a frontend on top of my API with Angular 5.

Note: The full source code for this application can be found on GitHub.

To get started, install the dependencies below:

Create a main.go file with the following code:

The handler function takes the movie genre ID as a parameter, then queries the TMDb API (an awesome free API for movies and TV shows) to get a list of movies. I registered the handler using the lambda.Start() method.
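Putting that together, a sketch of main.go might look like this; the API_KEY variable name and the trimmed response fields are assumptions, and the full version lives in the repository:

```go
// main.go - sketch: Lambda handler that returns movies for a given genre
// by calling the TMDb discover endpoint. The API key is read from the environment.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"

	"github.com/aws/aws-lambda-go/lambda"
)

type Request struct {
	ID int `json:"id"` // movie genre ID
}

type Movie struct {
	Title    string `json:"title"`
	Overview string `json:"overview"`
}

type Response struct {
	Movies []Movie `json:"results"`
}

func handler(req Request) (Response, error) {
	url := fmt.Sprintf(
		"https://api.themoviedb.org/3/discover/movie?api_key=%s&with_genres=%d",
		os.Getenv("API_KEY"), req.ID)

	res, err := http.Get(url)
	if err != nil {
		return Response{}, err
	}
	defer res.Body.Close()

	var data Response
	if err := json.NewDecoder(res.Body).Decode(&data); err != nil {
		return Response{}, err
	}
	return data, nil
}

func main() {
	lambda.Start(handler)
}
```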

To test our handler before deploying it, we can create a basic Unit Test:
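For instance, a test that calls the handler with a known TMDb genre ID (35 is Comedy) and checks that results come back; it expects the API_KEY variable to be exported in your shell:

```go
// main_test.go - basic unit test for the handler sketch above.
package main

import "testing"

func TestHandler(t *testing.T) {
	response, err := handler(Request{ID: 35}) // 35 = Comedy on TMDb
	if err != nil {
		t.Fatalf("handler returned an error: %v", err)
	}
	if len(response.Movies) == 0 {
		t.Error("expected at least one movie in the response")
	}
}
```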

Issue the following command to run the test:
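```bash
# -v prints each test as it runs
go test -v
```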

Next, build an executable binary for Linux:
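Lambda runs Go code on Amazon Linux, so the binary has to target linux/amd64:

```bash
GOOS=linux GOARCH=amd64 go build -o main main.go
```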

Zip it up into a deployment package:
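```bash
# the archive only needs to contain the compiled binary
zip deployment.zip main
```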

Use the AWS CLI to create a new Lambda Function:
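For example (the function name is arbitrary and the role ARN is a placeholder):

```bash
aws lambda create-function --function-name DiscoverMovies \
    --zip-file fileb://deployment.zip \
    --runtime go1.x --handler main \
    --role arn:aws:iam::ACCOUNT_ID:role/lambda-basic-execution
```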

Note: substitute the --role flag value with your own IAM role ARN.

Sign in to the AWS Management Console and navigate to the Lambda dashboard; you should see that your Lambda function has been created:

Set the TMDb API key (sign up for an account to get one) as an environment variable:
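This can be done from the console, or with a CLI call along these lines (API_KEY is the variable name assumed in the sketch above):

```bash
aws lambda update-function-configuration --function-name DiscoverMovies \
    --environment "Variables={API_KEY=YOUR_TMDB_API_KEY}"
```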

Create a new test event:

Upon successful execution, view results in the console:

To expose the function as an HTTPS API, let’s add API Gateway as a trigger:

Deployment:

Now, if you point your favorite browser to the Invoke URL:

Congratulations, you have created your first Lambda function in Go.

Let’s build a quick UI on top of the API with Angular 5. Create an Angular project from scratch using the Angular CLI, then generate a new service that calls the API Gateway URL:

In the main component, iterate over the API response:

Note: the full code is in GitHub.

Generate production grade artifacts:
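With the Angular CLI used by Angular 5:

```bash
ng build --prod
```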

The build artifacts will be stored in the dist/ directory.

Next, create an S3 bucket with AWS CLI:
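Bucket names are globally unique, so the name below is only a placeholder:

```bash
aws s3 mb s3://movies-tmdb-frontend
```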

Upload the build artifacts to the bucket:
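Using the same placeholder bucket and making the objects publicly readable:

```bash
aws s3 cp dist/ s3://movies-tmdb-frontend/ --recursive --acl public-read
```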

Finally, turn on website hosting for your bucket:
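Still with the placeholder bucket:

```bash
aws s3 website s3://movies-tmdb-frontend/ --index-document index.html --error-document index.html
```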

If you point your browser to the S3 Bucket URL, you should be happy:

DialogFlow (API.AI) Golang SDK

DialogFlow (formerly API.AI) lets you give users new ways to interact with your bot by building engaging voice- and text-based conversational interfaces powered by AI.

DialogFlow offers many SDKs in different programming languages:

But unfortunately, there’s no SDK for Golang.

But don’t be sad, I made an SDK to integrate DialogFlow with Golang:


This library allows integrating agents from the DialogFlow natural language processing service with your Golang application.

Issue the following command to install the library:
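Assuming the repository path is unchanged, a plain go get does it:

```bash
go get github.com/mlabouardy/dialogflow-go-client
```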

The example below displays the list of entities:

Note: for more details about the available methods, check the project GitHub repository.

For a real-world example of how to use this library, check my previous tutorial on creating a Messenger bot in Golang that shows the list of movies playing in cinemas and TV shows airing on TV: