Clean up old Docker images from Nexus Repository

Many of us are using Nexus as a repository to publish Docker images. Typically, we build images tagged with the commit hash (or, ideally, with semver) automatically in CI after each SCM change and push them to the registry. As a result, there are many “unneeded” and “old” images that, in our case, take a significant amount of disk space.

I looked around the Nexus graphical interface and there’s apparently no way to remove several Docker images at once, nor a scheduled task to clean up old hosted Docker images and the layers that are no longer used by any hosted image.

So I came up with a simple bash script that uses the Docker Registry API to purge Docker images, keeping only the last X and deleting all the others. But is there a better solution? Yes! I built a Nexus CLI.
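For the curious, the idea behind that script looks roughly like this. It is a simplified sketch, not the exact script: the registry URL, image name, credentials and tag count below are placeholders, and it naively assumes the tag list comes back from oldest to newest.

```bash
#!/bin/bash
# Simplified sketch: keep the last KEEP tags of IMAGE and delete the rest
# through the Docker Registry v2 API. All values below are placeholders.
REGISTRY="https://registry.example.com"
IMAGE="my-app"
KEEP=4
CREDENTIALS="user:password"

# Assumes /tags/list returns tags oldest-first (a simplification)
for tag in $(curl -s -u "$CREDENTIALS" "$REGISTRY/v2/$IMAGE/tags/list" \
               | jq -r '.tags[]' | head -n -"$KEEP"); do
  # Resolve the tag to its manifest digest, then delete that manifest
  digest=$(curl -sI -u "$CREDENTIALS" \
      -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
      "$REGISTRY/v2/$IMAGE/manifests/$tag" \
      | awk 'tolower($1)=="docker-content-digest:" {print $2}' | tr -d '\r')
  curl -s -u "$CREDENTIALS" -X DELETE "$REGISTRY/v2/$IMAGE/manifests/$digest"
done
```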

To install Nexus CLI, find the appropriate package for your system and download it. For Linux:
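For example (the URL below is a placeholder; grab the actual link to the latest Linux binary from the project’s releases page):

```bash
wget -O nexus-cli <link-to-the-latest-nexus-cli-linux-binary>
```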

After downloading Nexus CLI, add the execution permission to the binary:
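Assuming you saved the binary as nexus-cli in the current directory:

```bash
chmod +x nexus-cli
# optionally, move it somewhere on your PATH
sudo mv nexus-cli /usr/local/bin/
```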

Note: On Windows, make sure the nexus-cli binary is available on the PATH. This page contains instructions for setting the PATH on Windows.

After installing, verify that the installation worked by opening a new terminal session and checking that nexus-cli is available:
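Running the binary without any arguments should print its usage and the available commands (the exact output depends on the version you installed):

```bash
nexus-cli
```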

Once done, configure the Nexus credentials:
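Run:

```bash
nexus-cli configure
```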

Through nexus-cli configure, the Nexus CLI will prompt you for four pieces of information: the Username and Password of your account, the Nexus Hostname, and the Docker repository name.

That should be it. Try out the following command from your command prompt and, if you have any images, you should see them listed:
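The subcommands below follow the layout described in the project’s README; if they differ in your version, check the CLI’s built-in help. To list the hosted images:

```bash
nexus-cli image ls
```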

Display image tags:
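For example (my-app is a placeholder image name):

```bash
nexus-cli image tags -name my-app
```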

Image description:
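For a given tag (name and tag are placeholders again):

```bash
nexus-cli image info -name my-app -tag 1.0.0
```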

To remove a specific image:
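For example:

```bash
nexus-cli image delete -name my-app -tag 1.0.0
```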

To keep only the last X images and delete all the others:
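For instance, to keep only the 4 most recent tags of an image:

```bash
nexus-cli image delete -name my-app -keep 4
```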

That’s it! Let’s go back to the Nexus Dashboard:

As you can see, Nexus kept only the last 4 images and deleted the others.


The CLI is still in its early stages, so you are welcome to contribute to the project on GitHub.

Setting up an etcd cluster on AWS using CoreOS & Terraform

This post is part of the “IaC” series explaining how to use Infrastructure as Code concepts with Terraform. In this part, I will show you how to set up an etcd cluster on AWS using CoreOS & Terraform, as shown in the diagram below:

All the templates used in this demo can be found on my GitHub 😁.

So let’s start with the “variables.tf” file, which contains the global variables such as the AWS region, the cluster instance type, etc.
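Something along these lines (a sketch of what variables.tf can look like; the variable names and defaults are illustrative, not the exact file from the repository):

```hcl
variable "aws_region" {
  description = "AWS region to deploy the cluster in"
  default     = "eu-west-1"
}

variable "instance_type" {
  description = "EC2 instance type of the cluster nodes"
  default     = "t2.micro"
}

variable "ami" {
  description = "CoreOS stable AMI ID for the chosen region"
  default     = "ami-xxxxxxxx" # placeholder, see the note below
}
```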

Note: As of writing this article, the latest stable CoreOS version is 1465.6.0.

So make sure to find an AMI that is as close to the latest version as possible.
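One way to look up a recent CoreOS stable AMI in your region is the AWS CLI (595879546273 should be the account that publishes the CoreOS images, but double-check it for your setup):

```bash
# Print the name and ID of the most recently created CoreOS stable AMI
aws ec2 describe-images --owners 595879546273 \
  --filters "Name=name,Values=CoreOS-stable-*" \
  --query 'Images | sort_by(@, &CreationDate) | [-1].[Name, ImageId]' \
  --output text
```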

Next, we need to define a security group for our cluster. For simplicity, I’m going to make this security group open to the world. Even though security is important, this tutorial serves educational purposes, and you should never have all ports open in production.
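A sketch of such a security group (the resource and rule names are illustrative):

```hcl
resource "aws_security_group" "etcd_cluster" {
  name        = "etcd-cluster-sg"
  description = "Wide-open security group for the etcd demo cluster"

  # Allow everything in and out -- for this demo only, never in production
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```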

And finally, we will define our cluster, which consists of 3 nodes:
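A sketch of what that definition can look like (resource and file names are assumptions; the real templates are in the repository mentioned above):

```hcl
resource "aws_instance" "etcd_node" {
  count                  = 3
  ami                    = "${var.ami}"
  instance_type          = "${var.instance_type}"
  vpc_security_group_ids = ["${aws_security_group.etcd_cluster.id}"]

  # The cloud-config described below is passed through user_data
  user_data = "${file("cloud-config.yml")}"

  tags = {
    Name = "etcd-node-${count.index}"
  }
}
```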

In order to bring up the etcd cluster, I used a cloud-config file that I passed as a parameter to the user_data attribute:
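A minimal cloud-config for etcd2 follows the standard CoreOS pattern; replace <token> with your own discovery token, as explained in the note below:

```yaml
#cloud-config

coreos:
  etcd2:
    # replace <token> with your own discovery token
    discovery: https://discovery.etcd.io/<token>
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  units:
    - name: etcd2.service
      command: start
```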

Note: Make sure to grab a discovery token and place it into the discovery parameter:
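A fresh token for a 3-node cluster can be generated with:

```bash
curl -w "\n" 'https://discovery.etcd.io/new?size=3'
```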

Once you have defined all the required templates, just type the following command to bring up the etcd cluster:
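Run (on recent Terraform versions, terraform init is needed first to fetch the AWS provider):

```bash
terraform init    # fetches the AWS provider plugin
terraform apply
```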

Note: Don’t forget to set the AWS credentials as environment variables beforehand:
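For example (the values are placeholders for your own credentials):

```bash
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
```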

The etcd cluster setup in action is shown below 😎:

Once done, go to your AWS Management Console, then navigate to your EC2 Dashboard:

Congratulations! 🎉🎉 You have your CoreOS cluster.

To verify the cluster health, you can either point your browser to the discovery URL you generated earlier:

or SSH to one of your cluster nodes using the command:
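CoreOS images ship with a core user, so something like (use the public IP of any node):

```bash
ssh core@<node-public-ip>
```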

Then, use the etcd command line to fetch the cluster status:
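With the etcd v2 tooling that ships with CoreOS:

```bash
etcdctl cluster-health
etcdctl member list
```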

Now we have an etcd cluster ready to use. Let’s see what we can do with it:

  • Through etcdctl (see the first half of the example below):

  • Through the HTTP API (see the second half of the example below):
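Here is a quick taste of both, using the classic key/value example (the key name and value are arbitrary):

```bash
# Through etcdctl: write a key, then read it back
etcdctl set /message "Hello World"
etcdctl get /message

# Through the HTTP API (etcd v2 keys endpoint on port 2379)
curl -X PUT http://127.0.0.1:2379/v2/keys/message -d value="Hello World"
curl http://127.0.0.1:2379/v2/keys/message
```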