Slack Notification with CloudWatch Alarms & Lambda

ChatOps has emerged as one of the most effective techniques for implementing DevOps, so it is useful to route notifications and infrastructure alerts into collaboration messaging platforms like Slack & HipChat.

AWS CloudWatch Alarms and SNS are a great mix for building a real-time notification system, as SNS supports multiple endpoints (Email, HTTP, Lambda, SQS). Unfortunately, SNS doesn’t support sending notifications to tools like Slack out of the box.

CloudWatch will trigger an alarm to send a message to an SNS topic if the monitored data gets out of range. A Lambda function will be invoked in response to SNS receiving the message and will call the Slack API to post a message to a Slack channel.

To get started, create an EC2 instance using the AWS Management Console or the AWS CLI:
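A minimal CLI sketch (the AMI ID, instance type, key pair, and security group below are placeholders to replace with your own):

    aws ec2 run-instances \
        --image-id ami-xxxxxxxx \
        --instance-type t2.micro \
        --key-name my-key-pair \
        --security-group-ids sg-xxxxxxxx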

Next, create an SNS topic:
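For example, with the AWS CLI (the topic name is an arbitrary choice):

    aws sns create-topic --name cpu-utilization-alarm-topic

Note down the returned topic ARN; the alarm below references it.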

Then, set up a CloudWatch alarm that fires when the instance CPU utilization is higher than 40% and sends a notification to the SNS topic:
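A CLI sketch (the instance ID, region, and account ID are placeholders; the 5-minute period is an assumption):

    aws cloudwatch put-metric-alarm \
        --alarm-name cpu-utilization-high \
        --namespace AWS/EC2 \
        --metric-name CPUUtilization \
        --dimensions Name=InstanceId,Value=i-xxxxxxxxxxxxxxxxx \
        --statistic Average \
        --period 300 \
        --evaluation-periods 1 \
        --threshold 40 \
        --comparison-operator GreaterThanThreshold \
        --alarm-actions arn:aws:sns:us-east-1:ACCOUNT_ID:cpu-utilization-alarm-topic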

As a result:

To be able to post messages to a Slack channel, we need to create a Slack Incoming WebHook. Start by setting up an incoming webhook integration in your Slack workspace:

Note down the returned WebHook URL for the upcoming part.

The Lambda handler function is written in Go. It takes the SNS message as an argument, parses it, and queries the Slack API to post a message to the Slack channel configured in the previous section:
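A minimal sketch of such a handler, using the official aws-lambda-go runtime (the author's original code may differ; the SLACK_WEBHOOK_URL environment variable name is an assumption):

    package main

    import (
        "bytes"
        "encoding/json"
        "net/http"
        "os"

        "github.com/aws/aws-lambda-go/events"
        "github.com/aws/aws-lambda-go/lambda"
    )

    // handler forwards each SNS record's message to the Slack Incoming WebHook.
    func handler(evt events.SNSEvent) error {
        for _, record := range evt.Records {
            // Slack expects a JSON payload with a "text" field.
            payload, err := json.Marshal(map[string]string{
                "text": record.SNS.Message,
            })
            if err != nil {
                return err
            }
            // SLACK_WEBHOOK_URL is assumed to be set as a Lambda environment variable.
            resp, err := http.Post(os.Getenv("SLACK_WEBHOOK_URL"), "application/json", bytes.NewBuffer(payload))
            if err != nil {
                return err
            }
            resp.Body.Close()
        }
        return nil
    }

    func main() {
        lambda.Start(handler)
    }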

As Go is a compiled language, build the application and create a Lambda deployment package using the bash script below:
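Something along these lines (the binary and zip file names are arbitrary):

    #!/bin/bash
    # Cross-compile for Linux, the platform Lambda runs on.
    GOOS=linux GOARCH=amd64 go build -o main main.go
    # Package the binary into a deployment zip.
    zip deployment.zip main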

Once created, use the AWS CLI to deploy the function to Lambda. Make sure to override the Slack WebHook with your own:
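A deployment sketch; the function name and role ARN are placeholders, and the runtime/handler pairing assumes the aws-lambda-go sketch above:

    aws lambda create-function \
        --function-name SlackNotification \
        --runtime go1.x \
        --handler main \
        --zip-file fileb://deployment.zip \
        --role arn:aws:iam::ACCOUNT_ID:role/lambda-execution-role \
        --environment "Variables={SLACK_WEBHOOK_URL=https://hooks.slack.com/services/XXX/YYY/ZZZ}"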

Note: For non-Gophers, you can download the zip file directly from here.

From here, configure the invoking service for your function to be the SNS topic we created earlier:
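With the CLI, that amounts to a subscription plus an invoke permission (ARNs are placeholders):

    aws sns subscribe \
        --topic-arn arn:aws:sns:us-east-1:ACCOUNT_ID:cpu-utilization-alarm-topic \
        --protocol lambda \
        --notification-endpoint arn:aws:lambda:us-east-1:ACCOUNT_ID:function:SlackNotification

    aws lambda add-permission \
        --function-name SlackNotification \
        --statement-id sns-invoke \
        --action lambda:InvokeFunction \
        --principal sns.amazonaws.com \
        --source-arn arn:aws:sns:us-east-1:ACCOUNT_ID:cpu-utilization-alarm-topic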

Lambda Dashboard:

Let’s test it out: connect to your instance via SSH, then install stress, a workload-generation tool:
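On the Amazon Linux AMI, stress is available from the bundled EPEL repository; on Ubuntu, use apt-get instead:

    # Amazon Linux
    sudo yum install -y stress --enablerepo=epel
    # Ubuntu
    sudo apt-get update && sudo apt-get install -y stress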

Issue the following command to generate some load on the CPU:
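For example, stressing two CPU workers for five minutes should push utilization past the 40% threshold:

    stress --cpu 2 --timeout 300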

You should receive a Slack notification as below:

Note: You can go further and customize your Slack message.

AWS CloudWatch Monitoring with Grafana

Hybrid cloud is the new reality. Therefore, you will need a single, general-purpose dashboard and graph composer for your global infrastructure. That’s where Grafana comes into play. Due to its pluggable architecture, you have access to many widgets and plugins for creating interactive & user-friendly dashboards. In this post, I will walk you through how to create dashboards in Grafana to monitor your EC2 instances in real time, based on metrics collected in AWS CloudWatch.

To get started, create an IAM role with the following IAM policy:
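A minimal policy sketch covering what the Grafana CloudWatch data source needs (read-only CloudWatch access plus the EC2 describe calls used by the query variables later in this post):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "cloudwatch:ListMetrics",
            "cloudwatch:GetMetricStatistics",
            "ec2:DescribeInstances",
            "ec2:DescribeRegions"
          ],
          "Resource": "*"
        }
      ]
    }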

Launch an EC2 instance with the user-data script below. Make sure to associate the role we created earlier with the instance:
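A minimal user-data sketch that runs Grafana in Docker (the original script may instead install Grafana from its package repository):

    #!/bin/bash
    # Install and start Docker (Amazon Linux).
    yum update -y
    yum install -y docker
    service docker start
    # Run Grafana, exposing its dashboard on port 3000.
    docker run -d --name grafana -p 3000:3000 grafana/grafana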

In the security group section, allow inbound traffic on port 3000 (Grafana dashboard).

Once created, point your browser to http://instance_dns_name:3000; you should see the Grafana login page (default credentials: admin/admin):

Grafana ships with built-in support for CloudWatch, so add a new data source:

Note: In case you are using an IAM role (recommended), keep the other fields empty as above. Otherwise, create a new file at ~/.aws/credentials with your own AWS access key & secret key.

Create a new dashboard, and add a new graph to the panel. Select AWS/EC2 as the namespace, CPUUtilization as the metric, and the instance ID of the instance you want to monitor in the dimension field:

That’s great!

Well, instead of hard-coding the InstanceId in the query, we can use a Grafana feature called “Query Variables“. Create a new variable to hold the list of supported AWS regions:

And, create a second variable to store the list of instance IDs for the selected AWS region:
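The CloudWatch data source exposes query functions for both variables; a sketch, assuming the variables are named region and instance:

    region:    regions()
    instance:  ec2_instance_attribute($region, InstanceId, {})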

Now, go back to your graph and update the query as below:
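With those variables in place, the graph query references them instead of literal values (the field names below mirror the query editor, not exact syntax):

    Namespace:   AWS/EC2
    Metric:      CPUUtilization
    Dimensions:  InstanceId = $instance
    Region:      $region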

That’s it, go ahead and create other widgets:

Note: You can download the dashboard from GitHub.

Now you’re ready to build interactive & dynamic dashboards for your CloudWatch metrics.

Publish Custom Metrics to AWS CloudWatch

AWS Auto Scaling groups can only scale in response to metrics available in CloudWatch, and most of the default metrics are not sufficient for predictive scaling. That’s why you need to publish your own custom metrics to CloudWatch.

I was surfing the internet as usual, and I couldn’t find any post talking about how to publish custom metrics to AWS CloudWatch, so, because I’m a Gopher, I got my hands dirty and wrote my own script in Go.

You can publish your own metrics to CloudWatch using the AWS Go SDK:
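A minimal sketch with the v1 Go SDK (the namespace, metric name, and hard-coded sample value are illustrative only):

    package main

    import (
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/cloudwatch"
    )

    func main() {
        // The region is taken from the environment or instance metadata.
        sess := session.Must(session.NewSession())
        svc := cloudwatch.New(sess)

        // Publish a single data point under a custom namespace.
        _, err := svc.PutMetricData(&cloudwatch.PutMetricDataInput{
            Namespace: aws.String("CustomMetrics"),
            MetricData: []*cloudwatch.MetricDatum{
                {
                    MetricName: aws.String("MemoryUtilization"),
                    Unit:       aws.String(cloudwatch.StandardUnitPercent),
                    Value:      aws.Float64(37.5),
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }
    }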

To collect metrics about memory, for example, you can either parse the output of the ‘free -m’ command or use a third-party library like gopsutil:
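A sketch using gopsutil (the fields below come from its VirtualMemoryStat type on Linux):

    package main

    import (
        "fmt"
        "log"

        "github.com/shirou/gopsutil/mem"
    )

    func main() {
        memoryMetrics, err := mem.VirtualMemory()
        if err != nil {
            log.Fatal(err)
        }
        // Values are reported in bytes.
        fmt.Println("Used:", memoryMetrics.Used)
        fmt.Println("Available:", memoryMetrics.Available)
        fmt.Println("Buffers:", memoryMetrics.Buffers)
        fmt.Println("SwapCached:", memoryMetrics.SwapCached)
        fmt.Println("PageTables:", memoryMetrics.PageTables)
    }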

The memoryMetrics object exposes multiple metrics:

  • Memory used
  • Memory available
  • Buffers
  • Swap cached
  • Page Tables
  • etc

Each metric will be published with an InstanceID dimension. To get the instance ID, you can query the instance metadata endpoint:
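For example:

    package main

    import (
        "fmt"
        "io/ioutil"
        "log"
        "net/http"
    )

    func main() {
        // The metadata endpoint is only reachable from within the instance.
        resp, err := http.Get("http://169.254.169.254/latest/meta-data/instance-id")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        id, err := ioutil.ReadAll(resp.Body)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("Instance ID:", string(id))
    }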

What if I’m not a Gopher? Well, don’t freak out: I built a simple CLI that doesn’t require any Go knowledge or dependencies to be installed (the AWS CloudWatch Monitoring Scripts require Perl dependencies), and moreover it’s cross-platform.

The CLI collects the following metrics:

  • Memory: utilization, used, available.
  • Swap: utilization, used, free.
  • Disk: utilization, used, available.
  • Network: packets in/out, bytes in/out, errors in/out.
  • Docker: memory & cpu per container.

The CLI has been tested on instances using the following AMIs (64-bit versions):

  • Amazon Linux
  • Amazon Linux 2
  • Ubuntu 16.04
  • Microsoft Windows Server

To get started, find the appropriate package for your instance and download it. For Linux:
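A download sketch; the URL below is a hypothetical placeholder, so substitute the actual link from the project’s releases page:

    wget https://github.com/<user>/<repo>/releases/download/<version>/cli_linux_amd64.zip
    unzip cli_linux_amd64.zip
    chmod +x ./cli
    sudo mv ./cli /usr/local/bin/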

After you install the CLI, you may need to add the path to the executable file to your PATH variable. Then, issue the following command:

The command above will collect memory, swap, network & Docker container resource utilization on the current system.

Note: ensure an IAM role is associated with your instance, and verify that it grants permission to perform cloudwatch:PutMetricData.
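A minimal policy statement sketch for that permission:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "cloudwatch:PutMetricData",
          "Resource": "*"
        }
      ]
    }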

Now that we’ve written custom metrics to CloudWatch, you can view statistical graphs of your published metrics in the AWS Management Console:

You can create your own interactive and dynamic Dashboard based on these metrics:

Hope it helps! The CLI is still in its early stages, so you are welcome to contribute to the project on GitHub.

Youtube to MP3 using S3, Lambda & Elastic Transcoder

In this tutorial, I will show you how to convert a Youtube video 📺 to an MP3 file 💿 using AWS Elastic Transcoder. How can we do that?

We will create a Lambda function to consume events published by S3. For any video uploaded to a bucket, S3 will invoke our Lambda function and pass in the event information. AWS Lambda executes the function; as it runs, the function reads the S3 event data, logs some of the event information to Amazon CloudWatch, and then kicks off a transcoding job.

Let’s start by creating an S3 bucket to store the input files (videos) and the output files (audio):
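For example, with the CLI (the bucket name is a placeholder and must be globally unique):

    aws s3 mb s3://youtube-to-mp3-demo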

Next, let’s define a Transcoder pipeline. A pipeline essentially defines a queue for future transcoding jobs. To create a pipeline, we need to specify the input bucket (where the videos will be).
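A CLI sketch (the role ARN and bucket names are placeholders; the role must allow Elastic Transcoder to read from and write to the bucket):

    aws elastictranscoder create-pipeline \
        --name youtube-to-mp3 \
        --input-bucket youtube-to-mp3-demo \
        --output-bucket youtube-to-mp3-demo \
        --role arn:aws:iam::ACCOUNT_ID:role/Elastic_Transcoder_Default_Role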

Note: Copy down the Pipeline ID; we will need it later on.

Having created a pipeline, go to the AWS Management Console, navigate to the Lambda service, click on “Create a Lambda Function“, and add S3 as the event source for the Lambda function:

I used the following Node.JS code:
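A minimal sketch of such a handler (the author’s original may differ; the PIPELINE_ID environment variable and the output key naming are assumptions):

    'use strict';

    const AWS = require('aws-sdk');
    const transcoder = new AWS.ElasticTranscoder();

    exports.handler = (event, context, callback) => {
        // Extract the uploaded file's key from the S3 event record.
        const key = event.Records[0].s3.object.key;
        console.log('New file uploaded:', key);

        const params = {
            PipelineId: process.env.PIPELINE_ID,
            Input: { Key: key },
            Outputs: [{
                // Strip the extension and write the MP3 to the outputs directory.
                Key: 'outputs/' + key.split('.')[0] + '.mp3',
                PresetId: '1351620000001-300040', // System preset: Audio MP3 - 128k
            }],
        };

        // Create and launch the transcoding job.
        transcoder.createJob(params, (err, data) => {
            if (err) {
                console.log('Failed to create transcoding job:', err);
                return callback(err);
            }
            console.log('Transcoding job submitted:', data.Job.Id);
            callback(null, data);
        });
    };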

The script does the following:

  • Extract the filename of the uploaded file from the event object
  • Create a Transcoder job and specify the required outputs
  • Launch the job

Note: you might notice in the function above the use of a preset (1351620000001-300040), which describes how to encode the given file (in this case to MP3). The full list of available presets can be found in the AWS Documentation.

Finally, set the pipeline ID as an environment variable and select an IAM role with permission to access Elastic Transcoder:

Once created, upload a video file to the input bucket:
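For example, using the bucket created earlier (file and bucket names are placeholders):

    aws s3 cp video.mp4 s3://youtube-to-mp3-demo/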

If everything went well, you should see the file in your bucket:

S3 will trigger our Lambda function, which will execute and log the S3 object name to CloudWatch Logs:

After a couple of seconds (or minutes, depending on the size of the video), you should see a new MP3 file generated by the Elastic Transcoder job inside the outputs directory in the S3 bucket:

Setup AWS Lambda with Scheduled Events

This post is part of my “Serverless” series. In this part, I will show you how to set up a Lambda function to send mails on a defined schedule using a CloudWatch Events rule.

1 – Create Lambda Function

So start by cloning the project:

I implemented a simple Lambda function in NodeJS to send an email using the MailGun library:
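A minimal sketch of such a function (the sender, recipient, and message content are illustrative, and the mailgun-js package is an assumption):

    'use strict';

    const mailgun = require('mailgun-js')({
        apiKey: process.env.MAILGUN_API_KEY,
        domain: process.env.MAILGUN_DOMAIN,
    });

    exports.handler = (event, context, callback) => {
        const data = {
            from: 'AWS Lambda <lambda@example.com>',
            to: 'you@example.com',
            subject: 'Scheduled notification',
            text: 'This email was sent by a Lambda function on a CloudWatch schedule.',
        };

        // Send the email through the MailGun API.
        mailgun.messages().send(data, (err, body) => {
            if (err) {
                return callback(err);
            }
            callback(null, body);
        });
    };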

Note: you could use another service like AWS SES or your own SMTP server.

Then, create a zip file:
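Assuming the handler lives in index.js with its dependencies vendored in node_modules:

    zip -r function.zip index.js node_modules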

Next, we need to create an Execution Role for our function:
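A sketch of the role creation (the role and file names are placeholders). The trust policy, saved as trust-policy.json, lets Lambda assume the role, and the managed policy grants CloudWatch Logs access:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "Service": "lambda.amazonaws.com" },
          "Action": "sts:AssumeRole"
        }
      ]
    }

    aws iam create-role \
        --role-name lambda-mail-execution-role \
        --assume-role-policy-document file://trust-policy.json

    aws iam attach-role-policy \
        --role-name lambda-mail-execution-role \
        --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole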

Execute the following Lambda CLI command to create the function. We need to provide the zip file and the IAM role ARN we created earlier, and set MAILGUN_API_KEY and MAILGUN_DOMAIN as parameters:
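A sketch (the function name, handler, and ARN are placeholders consistent with the examples above):

    aws lambda create-function \
        --function-name ScheduledMail \
        --runtime nodejs6.10 \
        --handler index.handler \
        --zip-file fileb://function.zip \
        --role arn:aws:iam::ACCOUNT_ID:role/lambda-mail-execution-role \
        --environment "Variables={MAILGUN_API_KEY=your-api-key,MAILGUN_DOMAIN=your-domain}"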

Note: the --runtime parameter uses Node.js 6.10, but you can also specify Node.js 4.3.

Once created, AWS Lambda returns function configuration information as shown in the following example:

Now if we go back to the AWS Lambda Dashboard, we should see that our function has been successfully created:

2 – Configure a CloudWatch Rule

Create a new rule that will trigger our Lambda function every 5 minutes:
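Equivalently, with the CLI (names and ARNs are placeholders; note the function also needs an events.amazonaws.com invoke permission, added below):

    aws events put-rule \
        --name mail-every-5-minutes \
        --schedule-expression "rate(5 minutes)"

    aws lambda add-permission \
        --function-name ScheduledMail \
        --statement-id events-invoke \
        --action lambda:InvokeFunction \
        --principal events.amazonaws.com \
        --source-arn arn:aws:events:us-east-1:ACCOUNT_ID:rule/mail-every-5-minutes

    aws events put-targets \
        --rule mail-every-5-minutes \
        --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:ACCOUNT_ID:function:ScheduledMail"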

Note: you can specify the value as a rate or in the cron expression format. All schedules use the UTC time zone, and the minimum precision for schedules is one minute.

If you go back now to the Lambda Function Console and navigate to the Triggers tab, you should see that the CloudWatch rule has been added:

After 5 minutes, CloudWatch will trigger the Lambda Function and you should get an email notification: