Smoothwall Firewall project

Saturday, 4 November 2017

DNS resolution in Docker containers with Ubuntu Artful on AWS

This post is the solution to a problem I discovered, so I hope others will find it useful.
Spinning up AWS Ubuntu Zesty (17.04) images with Docker installed was straightforward with Ansible and Terraform, but then Ubuntu Artful (17.10) arrived, and the containers it spun up could not resolve DNS, regardless of which version of Docker I installed.
After a lot of testing, it appeared that the host machine was passing the wrong DNS server entry into resolv.conf within the container, so it could never work.
The Solution:
With systemd and docker, the preferred way to change a daemon setting is to create a new file in /etc/docker called daemon.json.
In that file, add the following to make Docker use the AWS VPC default DNS resolver, like so:

{
  "dns": [""]
}
Restart the Docker daemon, and the containers can now resolve DNS. There may be other ways to resolve this issue, but this works perfectly and uses the method preferred by the Docker community.
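For reference, a complete /etc/docker/daemon.json would look like the sketch below. The address itself is an assumption on my part: AWS places the VPC DNS resolver at the VPC CIDR base plus two, which comes out at in a default VPC, so substitute the value for your own VPC.

```json
{
  "dns": [""]
}
```

After writing the file, restart the daemon with "sudo systemctl restart docker".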
I hope this helps others who may run into this problem.
Other settings that can be made in that file can be found in the dockerd settings documentation.

Tuesday, 6 June 2017

Getting apps to work in cutting-edge Ubuntu Docker containers when they grizzle about locales.

Whilst running up an Artful Aardvark Ubuntu Docker container on a Zesty Ubuntu host server, I received a message that certain apps couldn't be loaded because the UTF-8 locales were not configured.

This was annoying, but I initially worked around it by installing the locales-all package into the container. It worked, but it bloated the container considerably.

There is a better and simpler way, which I found after a lot of digging around in the Docker documentation, as others must have hit this issue before.

What you need to do is set the right locale environment variable in your Dockerfile when you build your container; this solves the problem for most apps within the container, such as tmux and screen.
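The variable itself appears to have been lost from this post. The usual minimal fix, and an assumption on my part, is to point LANG at the C.UTF-8 locale, which Ubuntu base images ship with out of the box:

```dockerfile
# Assumed fix: C.UTF-8 is built into the base image, so no extra packages are needed
ENV LANG C.UTF-8
```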

If you find that this doesn't cure it for your application, you may need to move to the next step and include the following in your Dockerfile.

RUN apt-get update && apt-get install -y locales \
&& rm -rf /var/lib/apt/lists/* \
&& localedef -i en_US -c -f UTF-8 -A \
/usr/share/locale/locale.alias en_US.UTF-8
ENV LANG en_US.utf8

Friday, 12 May 2017

Getting an Elasticsearch shard to relocate after a cluster node's disk had become full

I came into work today to find that one of the test environment's Elasticsearch (ES) nodes had run out of disk space. A bad job had gone berserk overnight and filled the logs, which then filled the ES node's disks.

So what to do with a volume at 100% utilisation? After chatting to a few colleagues, we decided to delete one of the earlier indices, as the data in this environment was not life or death.

I first tried this from the GUI, which didn't work at all, so I switched to the CLI and issued the following:

curl -XDELETE localhost:9200/mydodgylogs-2017.05.08

That did the trick, and we were down to 80% disk utilisation, so I expected the shards to calmly sort themselves out. Unfortunately, no go: one shard was still refusing to relocate, which was effectively marking the cluster as yellow. So I had another chat with my colleague, and he informed me of a recovery log file that can cause this. Effectively, the disk being full had left the shard in an unstable state, and it needed some help to sort itself out.

So on the CLI again, I found the offending shard directory in the correct index and removed the recovery file.


As soon as that was removed the shard relocated fine and the system went back to being happy and green.

As it took some time with a few colleagues to get to the bottom of the problem, I thought others may find it useful in the future.

Running the normal health check command then gave me the following healthy output.

curl localhost:9200/_cluster/health?pretty

{
  "cluster_name" : "myclustername",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 45,
  "active_shards" : 90,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0
}

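If you want to check this from a script, you can pull the status field out of the health JSON with sed. In this sketch the canned JSON stands in for the live "curl -s localhost:9200/_cluster/health" call, just to show the extraction:

```shell
# Canned sample of the health output; normally: health=$(curl -s localhost:9200/_cluster/health)
health='{"cluster_name":"myclustername","status":"green","timed_out":false}'
# Pull out the value of the "status" field
status=$(printf '%s' "$health" | sed -n 's/.*"status" *: *"\([a-z]*\)".*/\1/p')
echo "$status"   # prints: green
```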

Wednesday, 26 August 2015

Using S3cmd to list the ACLs of all the files in an S3 bucket

In the absence of any replies from the S3cmd forum, I managed to get the ACL status of all files in a bucket with the command-line hack below.

I tried parsing the whole bucket with an asterisk as an option, but that didn't work with version 1.0.1 or 1.0.5 of S3cmd.

s3cmd -c ~/.s3cfg_uk2 ls s3://test-hubs/ | awk '{print $4}' | sed 's/s3\:\/\/test-hubs\///g' | xargs -I file s3cmd -c ~/.s3cfg_uk2 info s3://test-hubs/file 

This is all on one line, and the xargs option is a capital "i" and not an "el" as it may appear here ;-) If anyone can see how to refactor this to make it more efficient, be my guest, but it works.

Or indeed answer my original question with a snappy command line option to s3cmd ;-)

If a file has "anyone" access, you will see "anon" as the ACL setting, so you can search on that if you want to look for globally available files - which is what I wanted to do.
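To see what the middle of that pipeline is doing, here is the key-extraction part run against canned `s3cmd ls` output (the bucket name and files are made up):

```shell
# awk grabs the 4th column (the s3:// URL); sed strips the bucket prefix, leaving the key
printf '%s\n' \
  '2015-08-26 12:00      1234   s3://test-hubs/dir/file1.txt' \
  '2015-08-26 12:01      5678   s3://test-hubs/file2.txt' |
  awk '{print $4}' | sed 's|s3://test-hubs/||'
# prints:
# dir/file1.txt
# file2.txt
```

Each key then gets fed to "s3cmd info" by xargs, one call per file.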

Thursday, 11 June 2015

VM created by boot2docker init does not have DNS set up correctly - here's how to fix it

I came across this problem while setting up Docker in my OSX environment, and after searching the web I found a method that works well and cures the problem.

Once the VirtualBox VM has been created, you need to edit the network adapter settings and set both adapters to "PCnet-FAST III" instead of the paravirtualised drivers, then restart boot2docker. You will notice that you now get the correct settings in your /etc/resolv.conf file.

This had been annoying me, as it meant resetting the settings in that file every time I started Docker - which is not ideal. Hope this saves you some time.
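If you would rather script the change than click through the VirtualBox GUI, something like the following should do it. This is a sketch resting on two assumptions of mine: that the VM carries boot2docker's default name of "boot2docker-vm", and that "Am79C973" is VirtualBox's identifier for the PCnet-FAST III adapter:

```shell
# Stop the VM, switch both adapters to PCnet-FAST III, then start it again
boot2docker stop
VBoxManage modifyvm boot2docker-vm --nictype1 Am79C973
VBoxManage modifyvm boot2docker-vm --nictype2 Am79C973
boot2docker start
```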

Friday, 3 April 2015

Little trick I found helpful programming the DigitalOcean v2 API - create a new droplet

While looking through several examples, I found a few good links on how to create a new virtual machine (VM) using the Ruby programming language and the DigitalOcean gem they provide.

They all, however, left out one important point: you have to specify the SSH key id number as part of the call to create a new VM.

Without this last piece of the puzzle, you end up with a VM that is up and running, but that you cannot log into with your SSH keys - not ideal ;-)

The SSH keys have to be specified as a Ruby array, as you may want more than one SSH key associated with your VMs once they have been created - so here is an example with the missing piece inserted. The token mentioned in this code segment is the OAuth token that you create when you set up your DigitalOcean account. You can set it up as an env variable, but I have not shown that here.

To start you need to install this gem into your Ruby environment

gem install droplet_kit


require 'droplet_kit'
client = DropletKit::Client.new(access_token: token)
# ssh_keys takes an array of key id numbers - the missing piece
droplet = DropletKit::Droplet.new(name: '', region: 'nyc3', size: '1gb', image: 'ubuntu-14-04-x64', ssh_keys: [1234567])
client.droplets.create(droplet)

The code is downloadable from the Github gist link below.

You can find out the id number of your ssh keys with the following bash script , so that you know which id's to put into the array.

curl -X GET -H 'Content-Type: application/json' -H "Authorization: Bearer $TOKEN" "" | jq "."

Where $TOKEN is your OAuth API v2 key.
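If you don't have jq handy, a grep/cut pair does the same id extraction. The canned response below stands in for the real API call (which, as far as I know, goes to the account keys endpoint of the v2 API):

```shell
# Canned sample of the SSH keys response; normally this is the output of the curl call above
response='{"ssh_keys":[{"id":1234567,"name":"laptop"},{"id":7654321,"name":"desktop"}]}'
# Match each "id":<number> pair, then keep just the number after the colon
printf '%s\n' "$response" | grep -o '"id":[0-9]*' | cut -d: -f2
# prints:
# 1234567
# 7654321
```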

Hope this saves you time getting this up and running.

Ref blog post and Github gist of the code:
How to use the DigitalOcean v2 API
Github Gist of the code

Sunday, 31 August 2014

Using Docker to run rtorrent to reduce your servers resource requirements

Let me state early and clearly: anyone who thinks Docker is a new technology knows nothing about IT and even less about virtualisation. So if some smart alec in your office starts spouting about this new technology, take them down a peg or two with a few links to Solaris containers, OpenVZ or LXC.

What the Docker team have done extremely well, is make these ancient technologies very accessible and very easy to use, with a well defined tool set and API. This they must be highly praised for.

To borrow Newton's words, however, they stand on the shoulders of giants, who did a lot of the heavy lifting - and let us all not forget this.

I have been using containers for years, and we worked on a very successful cloud at Nokia using OpenVZ, where we built our own tools.

So I thought I would give Docker a spin on one of my cloud servers and just kick the tyres to start. I picked an application I use a lot for downloading Linux ISOs, so it seemed a good choice. It turned out to be very straightforward indeed.

The host operating system was CentOS 6.5 - which I'm growing less fond of as each week passes, as Ubuntu is simply better in the fast-moving cloud space. It was installed, however, and I couldn't be bothered to change it. You need to enable the EPEL repository and install Docker with yum.

I decided to download and use the userland tools of a Docker Ubuntu image - but whatever image you choose, the host kernel is the one that will be used. This is down to the historically well-thought-out ABI built into the Linux kernel, which allows this all to work.

Once the image was retrieved from the Docker registry, I fired up a container with the Ubuntu 14.04 tools and installed screen, rtorrent, vim and htop.

I always run rtorrent inside screen so I can just leave it running and come back to it when required. Also importantly, when starting your container, use /bin/bash so you have an interactive session and can go back to check on your screen/rtorrent session. A command like the following will do:

docker run -i -t my-ubuntu-image /bin/bash

Be careful, however, when you want to exit this container: DON'T type exit, but press CTRL-p, CTRL-q, so everything keeps running and you can reattach to the container when you wish to check on progress.
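Once you have detached with CTRL-p, CTRL-q, reattaching later is straightforward; a sketch, assuming the rtorrent container is the one you started most recently:

```shell
# -l picks the latest created container, -q prints just its id
docker attach "$(docker ps -l -q)"
```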

This uses significantly less resources than spinning up a KVM virtual image to do the same job, as it uses the resources of the host system that are already running.

I have deliberately not put all the commands needed to do this here, as the Docker documentation is good and very clear, so duplication is pointless.