Komiser EE: Setting Up Budget Alerts

Based on the latest survey and emails, many of you asked for Slack notifications.



We’re thrilled to announce the release of a new feature that allows you to set up daily spending alerts for AWS, GCP and DigitalOcean accounts.

Configuration

Head over to Slack’s “Your Apps” page and click the green “Create New App” button. A dialog like this will pop up:



Generate OAuth Token:



Add the permission scopes below and reinstall the app in the target workspace:



On the Komiser EE dashboard, navigate to the “Dashboard” section:



Click the “Create Alert” button:



Fill out the form and click “Create”:



That’s it. Every day at 9am GMT, you will receive a Slack notification with the current monthly cost of each cloud account configured in your Komiser EE account:



Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Komiser Stays Open

Komiser EE is built on top of Komiser CE, which means that Komiser continues to evolve and will stay open source. Nothing changes! We are firm believers in open source, and Komiser will remain our main priority and a community-driven project.

Komiser: Multiple AWS Accounts Support

Releases keep rolling! I’m thrilled to announce the release of Komiser 2.2.0 with support for multiple AWS accounts 🎊 🎉



But that’s not all: check the full changelog to get an idea of the great work that went into this release. Lots of bugs have been fixed, and new features have been added as well.

Highlights

Komiser supports multiple AWS accounts through named profiles that are stored in the config and credentials files. You can configure additional profiles by using aws configure with the --profile option, or by adding entries to the config and credentials files.
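For example, an additional named profile can be created from the command line as follows (a minimal sketch; the profile name, region and output format are illustrative):

# Create a named profile interactively; credentials are prompted for
aws configure --profile staging
# AWS Access Key ID [None]: <AWS_ACCESS_KEY_ID>
# AWS Secret Access Key [None]: <AWS_SECRET_ACCESS_KEY>
# Default region name [None]: us-east-1
# Default output format [None]: json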

The following example shows a credentials file with 3 profiles (production, staging & sandbox accounts):

[Production]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
[Staging]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
[Sandbox]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>

To enable the multiple AWS accounts feature, add the --multiple option when starting Komiser:

komiser start --port 3000 --redis localhost:6379 --duration 30 --multiple

If you point your browser to http://localhost:3000, you should be able to see your accounts:



You can now analyze and identify potential cost savings across unlimited AWS environments (production, staging, sandbox, etc.) in one single dashboard.

The versioned documentation can be found at https://docs.komiser.io.

Komiser is written in Golang and is MIT licensed — contributions are welcome, whether that means providing feedback or testing existing and new features.


https://komiser.io

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Highly Available Docker Registry on AWS with Nexus

Have you ever wondered how you can build a highly available and resilient Docker registry to store your Docker images?



In this post, we will set up an EC2 instance inside a security group and create an A record pointing to the server’s Elastic IP address, as follows:



To provision the infrastructure, we will use Terraform as the IaC (Infrastructure as Code) tool. The advantage of using this kind of tool is the ability to quickly spin up a new environment in a different AWS region (or a different IaaS provider) in case of an incident (disaster recovery).

Start by cloning the following GitHub repository:

git clone https://github.com/mlabouardy/terraform-aws-labs.git

Inside the docker-registry folder, update variables.tfvars with your own AWS credentials (make sure you have the right IAM policies).

resource "aws_instance" "default" {
  ami             = "${lookup(var.amis, var.region)}"
  instance_type   = "${var.instance_type}"
  key_name        = "${aws_key_pair.default.id}"
  security_groups = ["${aws_security_group.default.name}"]

  user_data = "${file("setup.sh")}"

  tags {
    Name = "registry"
  }
}

I specified a shell script to be used as user_data when launching the instance. It simply installs the latest version of Docker CE and turns the instance into Docker Swarm mode (to benefit from the replication and high availability of the Nexus container):

#!/bin/sh
yum update -y
yum install -y docker
service docker start
usermod -aG docker ec2-user
docker swarm init
docker service create --replicas 1 --name registry --publish 5000:5000 --publish 8081:8081 sonatype/nexus3:3.6.2

Note: Of course, you can use a configuration management tool like Ansible or Chef to provision the server once it’s created.

Then, issue the following command to create the infrastructure:

terraform apply -var-file=variables.tfvars

Once created, you should see the Elastic IP of your instance:



Connect to your instance via SSH:

ssh ec2-user@35.177.167.36

Verify that the Docker Engine is running in Swarm Mode:



Check that the Nexus service is running:



If you go back to your AWS Management Console and navigate to the Route 53 dashboard, you should see that a new A record has been created, pointing to the instance IP address.



Point your favorite browser to the Nexus dashboard URL (registry.slowcoder.com:8081). Log in and create a Docker hosted repository as below:



Edit the /etc/docker/daemon.json file; it should have the following content:

{
  "insecure-registries" : ["registry.slowcoder.com:5000"]
}

Note: For production, it’s highly recommended to secure your registry using a TLS certificate issued by a known CA.

Restart Docker for the changes to take effect:

service docker restart

Log in to your registry with the Nexus credentials (admin/admin123):



In order to push a new image to the registry:

docker push registry.slowcoder.com:5000/mlabouardy/movies-api:1.0.0-beta
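
Note: if the image was built locally without the registry prefix, you may need to tag it first so Docker knows which registry to push to (a minimal sketch, reusing the image name above):

docker tag mlabouardy/movies-api:1.0.0-beta registry.slowcoder.com:5000/mlabouardy/movies-api:1.0.0-beta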


Verify that the image has been pushed to the remote repository:



To pull the Docker image:

docker pull registry.slowcoder.com:5000/mlabouardy/movies-api:1.0.0-beta


Note: Sometimes you end up with many unused and dangling images that can quickly take up a significant amount of disk space:



You can either use the Nexus CLI tool or create a Nexus task to clean up old Docker images:



Populate the form as below:



The task above will run every day at midnight to purge unused Docker images from the “mlabouardy” registry.

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Hosting a Free Static Website on Google Cloud Storage

This guide walks you through setting up a free bucket to serve a static website through a custom domain name using Google Cloud Platform services.

Sign in to Google Cloud Platform, navigate to the Cloud DNS service and create a new public DNS zone:
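The same zone can also be created with the gcloud CLI (a minimal sketch; the zone name and domain below are the ones used later in this post):

gcloud dns managed-zones create serverlessmovies \
    --dns-name="serverlessmovies.com." \
    --description="Public DNS zone for serverlessmovies.com"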



By default it will have an NS (nameserver) record and an SOA (start of authority) record:



Go to your domain registrar; in my case I purchased a domain name from GoDaddy (super cheap). Add the nameserver names that were listed in your NS record:



PS: It can take some time for the changes on GoDaddy to propagate through to Google Cloud DNS.

Next, verify that you own the domain name using the Google Search Console. Many methods are available (HTML meta tag, Google Analytics, etc.). The easiest one is DNS verification through a TXT record:



Add the TXT record to your DNS zone created earlier:



DNS changes might take some time to propagate:



Once you have verified the domain, you can create a bucket with Cloud Storage under the verified domain name. The storage class should be “Multi-Regional” (a geo-redundant bucket, in case of an outage):



Copy the website static files to the bucket using the following command:

gsutil rsync -R . gs://www.serverlessmovies.com/

After the upload completes, your static files should be available on the bucket as follows:



Next, make the files publicly accessible by adding the allUsers entity with the Object Viewer role to the bucket permissions:
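Alternatively, this can be done from the command line with gsutil (a sketch, using the bucket name from this post):

gsutil iam ch allUsers:objectViewer gs://www.serverlessmovies.com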



Once shared publicly, a link icon appears for each object in the public access column. You can click on this icon to get the URL for the object:



Verify that content is served from the bucket by requesting the index.html link in your browser:



Next, set the main page to be index.html from the “Edit website configuration” section:



Now, we need to map our domain name with the bucket we created earlier. Create a CNAME record that points to c.storage.googleapis.com:
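If you prefer the command line, the record can be added with gcloud as well (a sketch; the zone name and TTL are assumptions):

gcloud dns record-sets transaction start --zone=serverlessmovies
gcloud dns record-sets transaction add "c.storage.googleapis.com." \
    --zone=serverlessmovies --name="www.serverlessmovies.com." \
    --ttl=300 --type=CNAME
gcloud dns record-sets transaction execute --zone=serverlessmovies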



Point your browser to your domain name; your website should be served:



While our solution works like a charm, we can access our content through HTTP only (Google Cloud Storage only supports HTTP when using it through a CNAME record). In the next post, we will serve our content through a custom domain over SSL using a Content Delivery Network (CDN).

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Deploy a Docker Swarm cluster on GCP using Terraform in 8 steps

Kubernetes might be the ultimate choice when deploying heavy workloads on Google Cloud Platform. However, Docker Swarm has always been quite popular among developers who prefer fast deployments and simplicity— and among ops who are learning to get comfortable with an orchestrated environment.

In this post, we will walk through how to deploy a Docker Swarm cluster on GCP using Terraform from scratch. Let’s do it!



All the templates and playbooks used in this tutorial can be found on my GitHub.

Get Started

To get started, sign in to your Google Cloud Platform console and create a service account private key from IAM:



Download the JSON file and store it in a secure folder.

For simplicity, I have divided my Swarm cluster components into multiple template files — each file is responsible for creating a specific Google Compute resource.

1. Set up your swarm managers

In this example, I have defined the Docker Swarm managers based on the CoreOS image:

resource "google_compute_instance" "managers" {
  count        = "${var.swarm_managers}"
  name         = "manager"
  machine_type = "${var.swarm_managers_instance_type}"
  zone         = "${var.zone}"

  boot_disk {
    initialize_params {
      image = "${var.image_name}"
      size  = 100
    }
  }

  metadata {
    sshKeys = "${var.ssh_user}:${file(var.ssh_pub_key_file)}"
  }

  network_interface {
    network       = "${google_compute_network.swarm.name}"
    access_config = {}
  }
}

2. Set up your swarm workers

Similarly, I defined a set of Swarm workers based on the CoreOS image, and I used the resource dependencies feature of Terraform to ensure the Swarm managers are deployed first. Please note the usage of the depends_on keyword:

resource "google_compute_instance" "workers" {
  count        = "${var.swarm_workers}"
  name         = "worker${count.index + 1}"
  machine_type = "${var.swarm_workers_instance_type}"
  zone         = "${var.zone}"

  depends_on = ["google_compute_instance.managers"]

  boot_disk {
    initialize_params {
      image = "${var.image_name}"
      size  = 100
    }
  }

  metadata {
    sshKeys = "${var.ssh_user}:${file(var.ssh_pub_key_file)}"
  }

  network_interface {
    network       = "${google_compute_network.swarm.name}"
    access_config = {}
  }
}

3. Define your network rules

Also, I have defined a network along with a set of firewall rules that allow inbound traffic for cluster management, Raft sync communications, Docker overlay network traffic, and SSH from anywhere:

resource "google_compute_firewall" "swarm" {
  name    = "swarm-firewall"
  network = "${google_compute_network.swarm.name}"

  allow {
    protocol = "icmp"
  }

  allow {
    protocol = "tcp"
    ports    = ["22", "2377", "7946"]
  }

  allow {
    protocol = "udp"
    ports    = ["7946", "4789"]
  }

  source_ranges = ["0.0.0.0/0"]
}

resource "google_compute_network" "swarm" {
  name = "swarm-network"
}

4. Automate your inventory with Terraform

In order to take automation to the next level, let’s use the Terraform template_file data source to generate a dynamic Ansible inventory from the Terraform state:

data "template_file" "inventory" {
  template = "${file("templates/inventory.tpl")}"

  depends_on = [
    "google_compute_instance.managers",
    "google_compute_instance.workers",
  ]

  vars {
    managers = "${join("\n", google_compute_instance.managers.*.network_interface.0.access_config.0.nat_ip)}"
    workers  = "${join("\n", google_compute_instance.workers.*.network_interface.0.access_config.0.nat_ip)}"
  }
}

resource "null_resource" "cmd" {
  triggers {
    template_rendered = "${data.template_file.inventory.rendered}"
  }

  provisioner "local-exec" {
    command = "echo '${data.template_file.inventory.rendered}' > ../ansible/inventory"
  }
}

The template file has the following format; the ${managers} and ${workers} placeholders will be replaced by the Swarm managers’ and workers’ IP addresses at runtime:

[managers]
${managers}

[workers]
${workers}

Finally, let’s define Google Cloud to be the default provider:

provider "google" {
  credentials = "${file("${var.credentials}")}"
  project     = "${var.project}"
  region      = "${var.region}"
}

5. Set up Ansible roles to provision instances

Once the templates are defined, we will use Ansible to provision our instances and turn them into a Swarm cluster. Hence, I created 3 Ansible roles:

  • python: as its name implies, it installs Python on the machine. CoreOS ships only with the basics; it’s a minimal Linux distribution without much beyond tools centered around running containers.
  • swarm-init: executes the docker swarm init command on the first manager and stores the swarm join tokens.
  • swarm-join: joins the node to the cluster using the token generated previously (the commands these roles automate are sketched after this list).
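
For reference, the swarm-init and swarm-join roles essentially automate commands along these lines (a manual sketch; the IP addresses and the token are placeholders):

# On the first manager: initialize the swarm and print the worker join token
docker swarm init --advertise-addr <manager_private_ip>
docker swarm join-token -q worker

# On each worker: join the cluster using that token
docker swarm join --token <worker_token> <manager_private_ip>:2377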

By now, your main playbook will look something like:

---
- name: Install Python
  hosts: managers:workers
  gather_facts: False
  roles:
    - python

- name: Init Swarm cluster
  hosts: managers
  gather_facts: False
  roles:
    - swarm-init

- name: Join Swarm cluster
  hosts: workers
  gather_facts: False
  vars:
    token: ""
    manager: ""
  roles:
    - swarm-join

6. Test your configuration

To test it out, open a new terminal session and issue the terraform init command to download the google provider:



Create an execution plan (dry run) with the terraform plan command. It shows you in advance the resources that will be created, which is good for debugging and for ensuring that you’re not doing anything wrong, as shown in the next screenshot:



You will be able to examine Terraform’s execution plan before you deploy it to GCP. When you’re ready, go ahead and apply the changes by issuing the terraform apply command.
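
In short, the workflow looks like this (assuming variable values are supplied through a terraform.tfvars file or -var flags):

terraform init     # download the google provider
terraform plan     # dry run: review the execution plan
terraform apply    # create the network, firewall rules and instances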

The following output will be displayed (some parts were cropped for brevity):



If you head back to Compute Engine Dashboard, your instances should be successfully created:



7. Create your Swarm cluster with Ansible

Now that our instances are created, we need to turn them into a Swarm cluster with Ansible. Issue the following command:

ansible-playbook -i inventory main.yml


Next, SSH into the manager instance using its public IP address:
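
A sketch of the command, using the SSH user and key pair configured in the Terraform variables (values here are placeholders):

ssh -i ~/.ssh/id_rsa <ssh_user>@<manager_public_ip>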



If you run docker node ls, you will get a list of nodes in the swarm:



Deploy the visualizer service with the following command:

docker service create --name=visualizer --publish=8080:8080/tcp \
--constraint=node.role==manager --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
dockersamples/visualizer


8. Update your network rules

The service is exposed on port 8080 of the instance. Therefore, we need to allow inbound traffic on that port; you can use Terraform to update the existing firewall rules:

resource "google_compute_firewall" "swarm" {
  name    = "swarm-firewall"
  network = "${google_compute_network.swarm.name}"

  allow {
    protocol = "icmp"
  }

  allow {
    protocol = "tcp"
    ports    = ["22", "2377", "7946", "8080"]
  }

  allow {
    protocol = "udp"
    ports    = ["7946", "4789"]
  }

  source_ranges = ["0.0.0.0/0"]
}

resource "google_compute_network" "swarm" {
  name = "swarm-network"
}

Run terraform apply again to create the new ingress rule; Terraform will detect the changes and ask you to confirm them:



If you point your favorite browser to http://instance_ip:8080, the following dashboard will be displayed, which confirms our cluster is fully set up:



In an upcoming post, we will see how we can take this further by creating a production-ready Swarm cluster on GCP inside a VPC — and how to provision Swarm managers and workers on-demand using instance groups based on increases or decreases in load.

We will also learn how to bake a CoreOS machine image with Python preinstalled with Packer, and how to use Terraform and Jenkins to automate the infrastructure deployment!

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Deploy Private Docker Registry on GCP with Nexus, Terraform and Packer

In this post, I will walk you through how to deploy Sonatype Nexus OSS 3 on Google Cloud Platform and how to create a private Docker hosted repository to store your Docker images and other build artifacts (Maven, npm, PyPI, etc.). To achieve this, we need to bake our machine image using Packer to create a gold image with Nexus preinstalled and configured. Terraform will then be used to deploy a Google Compute instance based on the baked image. The following schema describes the build workflow:



PS: All the templates used in this tutorial can be found on my GitHub.

To get started, we need to create the machine image to be used with Google Compute Engine (GCE). Packer will create a temporary instance based on the CentOS image and use a shell script to provision the instance:

{
  "variables" : {
    "zone" : "YOUR ZONE",
    "project" : "YOUR PROJECT ID",
    "source_image" : "centos-7-v20181210",
    "ssh_username" : "packer",
    "credentials_path" : "PATH/account.json"
  },
  "builders" : [
    {
      "type": "googlecompute",
      "account_file": "{{user `credentials_path`}}",
      "project_id": "{{user `project`}}",
      "source_image": "{{user `source_image`}}",
      "ssh_username": "{{user `ssh_username`}}",
      "zone": "{{user `zone`}}",
      "image_name" : "nexus-v3-14-0-04"
    }
  ],
  "provisioners" : [
    {
      "type" : "file",
      "source" : "./nexus.rc",
      "destination" : "/tmp/nexus.rc"
    },
    {
      "type" : "file",
      "source" : "./repository.json",
      "destination" : "/tmp/repository.json"
    },
    {
      "type" : "shell",
      "script" : "./setup.sh",
      "execute_command" : "sudo -E -S sh '{{ .Path }}'"
    }
  ]
}

The shell script will install the latest stable version of Nexus OSS based on the official documentation and wait for the service to be up and running; then it will use the Scripting API to post a Groovy script:

#!/bin/bash

NEXUS_USERNAME="admin"
NEXUS_PASSWORD="admin123"

echo "Install Java JDK 8"
yum update -y
yum install -y java-1.8.0-openjdk wget

echo "Install Nexus OSS"
mkdir /opt/nexus
cd /opt/nexus
wget https://download.sonatype.com/nexus/3/latest-unix.tar.gz
tar -xvf latest-unix.tar.gz
rm latest-unix.tar.gz
mv nexus-3.14.0-04 nexus
useradd nexus
chown -R nexus:nexus /opt/nexus/
ln -s /opt/nexus/nexus/bin/nexus /etc/init.d/nexus
cd /etc/init.d
chkconfig --add nexus
chkconfig --levels 345 nexus on
mv /tmp/nexus.rc /opt/nexus/nexus/bin/nexus.rc
service nexus restart

until $(curl --output /dev/null --silent --head --fail http://localhost:8081); do
printf '.'
sleep 2
done


echo "Upload Groovy Script"
curl -v -X POST -u $NEXUS_USERNAME:$NEXUS_PASSWORD --header "Content-Type: application/json" 'http://localhost:8081/service/rest/v1/script' -d @/tmp/repository.json

echo "Execute it"
curl -v -X POST -u $NEXUS_USERNAME:$NEXUS_PASSWORD --header "Content-Type: text/plain" 'http://localhost:8081/service/rest/v1/script/docker-repository/run'

The Groovy script will create a private Docker registry listening on port 5000:

import org.sonatype.nexus.blobstore.api.BlobStoreManager; 
import org.sonatype.nexus.repository.storage.WritePolicy;

repository.createDockerHosted('mlabouardy', 5000, 443, BlobStoreManager.DEFAULT_BLOBSTORE_NAME, true, true, WritePolicy.ALLOW)

Once the template files are defined, issue the packer build command to bake our machine image:
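
A sketch of the commands (the template filename is illustrative; use your own):

packer validate nexus.json
packer build nexus.json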



If you head back to the Images section of the Compute Engine dashboard, a new image called nexus should have been created:



Now we are ready to deploy Nexus. We will create a Nexus server based on the machine image we baked with Packer. The template file is self-explanatory: it creates a set of firewall rules to allow inbound traffic on ports 8081 (Nexus GUI) and 22 (SSH) from anywhere, and creates a Google Compute instance based on the Nexus image:

provider "google" {
  credentials = "${file("${var.credentials}")}"
  project     = "${var.project}"
  region      = "${var.region}"
}

resource "google_compute_firewall" "nexus" {
  name    = "nexus-firewall"
  network = "${google_compute_network.nexus.name}"

  allow {
    protocol = "tcp"
    ports    = ["22", "8081"]
  }

  source_ranges = ["0.0.0.0/0"]
}

resource "google_compute_network" "nexus" {
  name = "nexus-network"
}

resource "google_compute_instance" "nexus" {
  name         = "nexus"
  machine_type = "${var.instance_type}"
  zone         = "${var.zone}"

  boot_disk {
    initialize_params {
      image = "${var.image_name}"
      size  = 100
    }
  }

  metadata {
    sshKeys = "${var.ssh_user}:${file(var.ssh_pub_key_file)}"
  }

  network_interface {
    network       = "${google_compute_network.nexus.name}"
    access_config = {}
  }
}

In the terminal, run the terraform init command to download and install the Google provider, as follows:



Create an execution plan (dry run) with the terraform plan command. It shows you in advance the resources that will be created, which is good for debugging and for ensuring that you’re not doing anything wrong, as shown in the next screenshot:



When you’re ready, go ahead and apply the changes by issuing terraform apply:



Terraform will create the needed resources and display the public IP address of the Nexus instance in the outputs section. Jump back to the GCP Console; your Nexus instance should be created:



If you point your favorite browser to http://instance_ip:8081, you should see the Sonatype Nexus Repository Manager interface:



Click the “Sign in” button in the upper right corner and use the username “admin” and the password “admin123”. Then, click on the cogwheel to go to the server administration and configuration section. Navigate to “Repositories”; our private Docker repository should have been created as follows:



The Docker repository is published as expected on port 5000:



Hence, we need to allow inbound traffic on that port, so update the firewall rules accordingly:

resource "google_compute_firewall" "nexus" {
  name    = "nexus-firewall"
  network = "${google_compute_network.nexus.name}"

  allow {
    protocol = "tcp"
    ports    = ["22", "8081", "5000"]
  }

  source_ranges = ["0.0.0.0/0"]
}

resource "google_compute_network" "nexus" {
  name = "nexus-network"
}

Issue the terraform apply command to apply the changes:



Your private Docker registry is now available at instance_ip:5000; let’s test it by pushing a Docker image.

Since we have exposed the private Docker registry on a plain HTTP endpoint, we need to configure the Docker daemon that will act as a client to the private Docker registry to allow insecure connections.



  • On Windows or Mac OS X: Click on the Docker icon in the tray to open Preferences. Click on the Daemon tab and add the IP address on which the Nexus GUI is exposed, along with the port number 5000, to the Insecure registries section. Don’t forget to Apply & Restart for the changes to take effect, and you’re ready to go.
  • Other OS: Follow the official guide (a Linux sketch is shown after this list).
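
On Linux, for example, the equivalent is to add the registry endpoint to /etc/docker/daemon.json and restart the daemon (a minimal sketch; INSTANCE_IP is a placeholder for the Nexus instance public IP, and this overwrites any existing daemon.json):

# INSTANCE_IP is assumed to hold the Nexus instance public IP address
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "insecure-registries" : ["${INSTANCE_IP}:5000"]
}
EOF
sudo systemctl restart docker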

You should now be able to log in to your private Docker registry using the following command:
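
A sketch of the command, assuming the default Nexus credentials (admin/admin123) and the instance public IP:

docker login <instance_ip>:5000 -u admin -p admin123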



And push your Docker images to the registry with the docker push command:
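
For example (the image name is illustrative):

docker tag mlabouardy/movies-api:latest <instance_ip>:5000/mlabouardy/movies-api:latest
docker push <instance_ip>:5000/mlabouardy/movies-api:latest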



If you head back to the Nexus dashboard, your Docker image should be stored with the latest tag:



Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Immutable AMI with Packer

When dealing with hybrid or multi-cloud environments, you need identical machine images for multiple platforms built from a single source configuration. That’s where Packer comes into play.

To get started, find the appropriate package for your system and download Packer:

# Download the Packer archive, then extract the binary to /usr/local/bin
curl -o packer.zip https://releases.hashicorp.com/packer/1.2.2/packer_1.2.2_darwin_amd64.zip
unzip packer.zip -d /usr/local/bin/
chmod +x /usr/local/bin/packer

With Packer installed, let’s just dive right into it and bake our AMI with a preinstalled Docker Engine in order to build a Swarm or Kubernetes cluster and avoid cold-start of node machines.

Packer is template-driven; templates are written in JSON format:

{
  "variables" : {
    "region" : "us-east-1"
  },
  "builders" : [
    {
      "type" : "amazon-ebs",
      "profile" : "default",
      "region" : "{{user `region`}}",
      "instance_type" : "t2.micro",
      "source_ami" : "ami-1853ac65",
      "ssh_username" : "ec2-user",
      "ami_name" : "docker-17.12.1-ce",
      "ami_description" : "Amazon Linux Image with Docker-CE",
      "run_tags" : {
        "Name" : "packer-builder-docker",
        "Tool" : "Packer",
        "Author" : "mlabouardy"
      }
    }
  ],
  "provisioners" : [
    {
      "type" : "shell",
      "script" : "./setup.sh"
    }
  ]
}

The template is divided into 3 sections:

  • variables: Custom variables that can be overridden at runtime by using the -var flag. In the above snippet, we’re specifying the AWS region.
  • builders: You can specify multiple builders depending on the target platforms (EC2, VMware, Google Cloud, Docker …).
  • provisioners: You can pass a shell script or use configuration management tools like Ansible, Chef, Puppet or Salt to provision the AMI and install all required packages and software.

Packer will use an existing Amazon Linux image (“gold image”) from the marketplace and install the latest Docker Community Edition using the following Bash script:

#!/bin/sh

sudo yum update -y
sudo yum install docker -y
sudo service docker start
sudo usermod -aG docker ec2-user

Note: You can avoid hardcoding the Gold Image ID in the template by using the source_ami_filter attribute.

Before we take the template and build an image from it, let’s validate the template by running:

packer validate ami.json

Now that we have our template file and bash provisioning script ready to go, we can issue the following command to build our new AMI:

packer build ami.json


This will chew for a bit and finally output the AMI ID:



Next, create a new EC2 instance based on the AMI:

aws ec2 run-instances --image-id ami-3fbc1940 \
    --count 1 --instance-type t2.micro \
    --key-name KEY_NAME --security-group-ids SG_ID \
    --region us-east-1 \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=demo}]'

Then, connect to your instance via SSH and type the following command to verify that the latest Docker release is installed:
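
A sketch of the check:

docker --version    # should print the Docker release baked into the AMI (17.12.1-ce)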



Simple, right? Well, you can go further and set up a CI/CD pipeline to build your AMIs on every push, recreate your EC2 instances with the new AMIs, and roll back in case of failure.



Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Komiser: AWS Environment Inspector

In order to build HA and resilient applications in AWS, you need to assume that everything will fail. Therefore, you always design and deploy your application across multiple AZs and regions. As a result, you end up with many unused AWS resources (snapshots, ELBs, EC2 instances, Elastic IPs, etc.) that could cost you a fortune.

One pillar of the AWS Well-Architected Framework is cost optimization. That’s why you need to have a global overview of your AWS infrastructure. Fortunately, AWS offers many fully managed services like CloudWatch, CloudTrail, Trusted Advisor and AWS Config to help you achieve that, but they require a deep understanding of the AWS platform and they are not straightforward.



That’s why I came up with Komiser, a tool that simplifies the process by querying the AWS API to fetch information about almost all critical AWS services (EC2, RDS, ELB, S3, Lambda, etc.) in real time, in a single dashboard.

Note: To avoid exceeding the AWS API rate limit, responses are cached in an in-memory cache for 30 minutes by default.

AWS services supported by Komiser:



  • Compute:
    • Running/Stopped/Terminated EC2 instances
    • Current EC2 instances per region
    • EC2 instances per family type
    • Lambda Functions per runtime environment
    • Disassociated Elastic IP addresses
    • Total number of Key Pairs
    • Total number of Auto Scaling Groups
  • Network & Content Delivery:
    • Total number of VPCs
    • Total number of Network Access Control Lists
    • Total number of Security Groups
    • Total number of Route Tables
    • Total number of Internet Gateways
    • Total number of Nat Gateways
    • Elastic Load Balancers per family type (ELB, ALB, NLB)
  • Management Tools:
    • CloudWatch Alarms State
    • Billing Report (Up to 6 months)
  • Database:
    • DynamoDB Tables
    • DynamoDB Provisioned Throughput
    • RDS DB instances
  • Messaging:
    • SQS Queues
    • SNS Topics
  • Storage:
    • S3 Buckets
    • EBS Volumes
    • EBS Snapshots
  • Security Identity & Compliance:
    • IAM Roles
    • IAM Policies
    • IAM Groups
    • IAM Users

1 – Configuring Credentials

Komiser needs your AWS credentials to authenticate with AWS services. The CLI supports multiple methods of providing these credentials. By default, the CLI will source credentials automatically from its default credential chain. The common items in the credential chain are the following:

  • Environment Credentials
    • AWS_ACCESS_KEY_ID
    • AWS_SECRET_ACCESS_KEY
    • AWS_DEFAULT_REGION
  • Shared Credentials file (~/.aws/credentials)
  • EC2 Instance Role Credentials

To get started, create a new IAM user and assign the following IAM policy to it:

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "1",
"Effect": "Allow",
"Action": [
"ec2:DescribeRegions",
"ec2:DescribeInstances",
"ec2:DescribeVolumes",
"ec2:DescribeVpcs",
"ec2:DescribeSecurityGroups",
"ec2:DescribeNatGateways",
"ec2:DescribeRouteTables",
"ec2:DescribeSnapshots",
"ec2:DescribeNetworkAcls",
"ec2:DescribeKeyPairs",
"ec2:DescribeInternetGateways"
],
"Resource": "*"
},
{
"Sid": "2",
"Effect": "Allow",
"Action": [
"ec2:DescribeAddresses",
"ec2:DescribeSnapshots",
"elasticloadbalancing:DescribeLoadBalancers",
"autoscaling:DescribeAutoScalingGroups",
"ce:GetCostAndUsage",
"s3:ListAllMyBuckets"
],
"Resource": "*"
},
{
"Sid": "3",
"Effect": "Allow",
"Action": [
"lambda:ListFunctions",
"dynamodb:ListTables",
"dynamodb:DescribeTable",
"rds:DescribeDBInstances",
"cloudwatch:DescribeAlarms",
"cloudfront:ListDistributions"
],
"Resource": "*"
},
{
"Sid": "4",
"Effect": "Allow",
"Action": [
"sqs:ListQueues",
"route53:ListHostedZones",
"sns:ListTopics",
"iam:ListGroups",
"iam:ListRoles",
"iam:ListPolicies",
"iam:ListUsers"
],
"Resource": "*"
}
]
}

Next, generate a new AWS access key and secret key, then update the ~/.aws/credentials file as below:

[default]
aws_access_key_id = AWS ACCESS KEY ID
aws_secret_access_key = AWS SECRET ACCESS KEY
region = us-east-1

2 – Installation

2.1 – CLI

Find the appropriate package for your system and download it. For Linux:

wget https://s3.us-east-1.amazonaws.com/komiser/1.0.0/linux/komiser
chmod +x komiser

Note: The Komiser CLI is updated frequently with support for new AWS services. To see if you have the latest version, see the project’s GitHub repository.

After you install the Komiser CLI, you may need to add the path to the executable file to your PATH variable.
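
For example (paths are illustrative):

# Either move the binary to a directory that is already on the PATH...
sudo mv komiser /usr/local/bin/
# ...or add its current directory to the PATH for this session
export PATH=$PATH:$(pwd)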

2.2 – Docker Image

Use the official Komiser Docker Image:

docker run -d -p 3000:3000 -e AWS_ACCESS_KEY_ID="" -e AWS_SECRET_ACCESS_KEY="" -e AWS_DEFAULT_REGION="" --name komiser mlabouardy/komiser

3 – Overview

Once installed, start the Komiser server:

komiser start --port 3000 --duration 30

If you point your favorite browser to http://localhost:3000, you should see the Komiser dashboard:



Hope it helps! The CLI is still in its early stages, so you are welcome to contribute to the project on GitHub.

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Butler CLI: Export/Import Jenkins Plugins & Jobs

Not long ago, I had to migrate Jenkins jobs from an old server to a new one. That’s where StackOverflow comes into play; below are the most voted answers I found:

  • Jenkins CLI
  • Copy the jobs directory
  • Jenkins Remote API
  • Jenkins Job Import Plugin

In spite of their advantages, those solutions come with their downsides, especially if you have a large number of jobs to move or no root access to the server. But guess what? I didn’t stop there. I came up with a CLI to make your life easier and export/import not only Jenkins jobs but also plugins like a boss.

To get started, find the appropriate package for your system and download it. For Linux:

wget https://s3.us-east-1.amazonaws.com/butlercli/1.0.0/linux/butler
chmod +x butler
mv butler /usr/local/bin/

Note: For Windows, make sure that the butler binary is available on the PATH. This page contains instructions for setting the PATH on Windows.

Once done, verify the installation worked by opening a new terminal session and checking if butler is available:

butler help


1 – Plugins Management

To export Jenkins plugins, you need to provide the URL of the source Jenkins instance:

butler plugins export --server localhost:8080 --username admin --password admin


As shown above, butler will dump the list of installed plugins to stdout and generate a new file, plugins.txt, containing the installed Jenkins plugins as name and version pairs:

bouncycastle-api@2.16.2
structs@1.10
script-security@1.39
scm-api@2.2.6
workflow-step-api@2.14
workflow-api@2.24
workflow-support@2.16
durable-task@1.17
workflow-durable-task-step@2.17
credentials@2.1.16
ssh-credentials@1.13
plain-credentials@1.4
credentials-binding@1.13
gradle@1.28
pipeline-input-step@2.8
apache-httpcomponents-client-4-api@4.5.3-2.0
junit@1.23
windows-slaves@1.3.1
display-url-api@2.2.0
mailer@1.20
matrix-auth@2.2
antisamy-markup-formatter@1.5
matrix-project@1.12
jsch@0.1.54.1
git-client@2.7.0
pam-auth@1.3
authentication-tokens@1.3
docker-commons@1.11
ace-editor@1.1
jquery-detached@1.2.1
workflow-scm-step@2.6
workflow-cps@2.42
docker-workflow@1.14
jackson2-api@2.8.10.1
github-api@1.90
git@3.7.0
workflow-job@2.12.2
token-macro@2.3
github@1.28.1

Now, to import the plugins to the new Jenkins instance, use the command below with the URL of the Jenkins target instance as an argument:

butler plugins import --server localhost:8080 --username admin --password admin


Butler will install each plugin on the target Jenkins instance by issuing API calls.

2 – Jobs Management



To export Jenkins jobs, just provide the URL of the source Jenkins server:

butler jobs export --server localhost:8080 --username admin --password admin


A new directory jobs/ will be created with every job in Jenkins. Each job will have its own configuration file config.xml.



Now, to import the jobs to the new Jenkins instance, issue the following command:



butler jobs import --server localhost:8080 --username admin --password admin


Butler will use the configuration files created earlier to issue API calls to the target Jenkins instance and create the jobs.

Once you are done, check Jenkins and you should see your jobs successfully created:



Hope it helps! The CLI is still in its early stages, so you are welcome to contribute to the project on GitHub.

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Clean up old Docker images from Nexus Repository

Many of us are using Nexus as a repository to publish Docker images. Typically, we automatically build images tagged with the commit hash (or ideally using semver) after each SCM change in CI, and we push them to the registry. As a result, there are many “unneeded” and “old” images that, in our case, take up a significant amount of disk space.



I looked around the Nexus graphical interface and there’s apparently no way to remove several Docker images at once, nor a scheduled task to clean up old hosted Docker images and the layers which are no longer used by any hosted image.



So I came up with a simple bash script which uses the Docker Registry API to purge Docker images, keeping the last X images and deleting all the others. But is there a better solution? Yes! I built a Nexus CLI.

To install the Nexus CLI, find the appropriate package for your system and download it. For Linux:

wget https://s3.eu-west-2.amazonaws.com/nexus-cli/1.0.0-beta/linux/nexus-cli

After downloading the Nexus CLI, add the execute permission to the binary:

chmod +x nexus-cli


Note: For Windows, make sure that the nexus-cli binary is available on the PATH. This page contains instructions for setting the PATH on Windows.

After installing, verify the installation worked by opening a new terminal session and checking if nexus-cli is available:



Once done, configure the Nexus credentials:

nexus-cli configure


Through nexus-cli configure, the Nexus CLI will prompt you for four pieces of information: the username and password of your account, the Nexus hostname, and the Docker repository name.

That should be it. Try out the following command from your command prompt and, if you have any images, you should see them listed:

nexus-cli image ls


Display image tags:

nexus-cli image tags -name IMAGE_NAME


Image description:

nexus-cli image info -name IMAGE_NAME -tag TAG


To remove a specific image:

nexus-cli image delete -name IMAGE_NAME -tag TAG


To keep only the last X images and delete all the others:

nexus-cli image delete -name IMAGE_NAME -keep X


That’s it! Let’s go back to the Nexus dashboard:



As you can see, Nexus kept only the last 4 images and deleted the others.



The CLI is still in its early stages, so you are welcome to contribute to the project on GitHub.

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.
