Smoothwall Firewall project

Friday, 12 May 2017

Getting an Elasticsearch shard to relocate after a cluster node's disk had become full

I came into work today to find that one of the test environment's Elasticsearch (ES) nodes had run out of disk space. A bad job had gone berserk overnight and filled the logs, which then filled the ES node's disks.

So what to do with a volume at 100% utilisation? After chatting to a few colleagues, we decided to delete one of the earlier indices, as the data in the dev environment was not life or death.

I first tried this from the GUI, which didn't work at all, so I switched to the CLI and issued the following:

curl -XDELETE localhost:9200/mydodgylogs-2017.05.08

That did the trick and we were back down to 80% disk utilisation, so I expected the shards to sort themselves out calmly. Unfortunately no: one shard was still refusing to relocate, which kept the cluster marked as yellow. Another chat with my colleague, and he informed me of a recovery file that can cause this. Effectively, the disk filling up had left the shard in an unstable state, and it needed some help to sort itself out.
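If it helps anyone diagnosing something similar, the stuck shard shows up clearly in the _cat/shards output. Here is a minimal sketch of the filter I mean; the index name and sample lines below are hypothetical stand-ins - in real use you would pipe `curl -s localhost:9200/_cat/shards` into the awk stage:

```shell
# Columns of _cat/shards are: index shard prirep state docs store ip node.
# Print index, shard number and state for anything not STARTED.
printf 'mydodgylogs-2017.05.09 1 p STARTED 1000 5mb 10.0.0.1 node-1\nmydodgylogs-2017.05.09 2 p UNASSIGNED\n' \
  | awk '$4 != "STARTED" {print $1, $2, $4}'
```

That prints `mydodgylogs-2017.05.09 2 UNASSIGNED` for the sample above, i.e. exactly the shard you need to go and look at on disk.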

So on the CLI again, I found the offending shard directory under the correct index and removed the stale recovery file it had left behind.


As soon as that was removed, the shard relocated fine and the system went back to being happy and green.

As it took some time and a few colleagues to get to the bottom of the problem, I thought others might find this useful in the future.

Running the normal health check command then gave me the following healthy output:

curl localhost:9200/_cluster/health?pretty

{
  "cluster_name" : "myclustername",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 45,
  "active_shards" : 90,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0
}


Wednesday, 26 August 2015

Using S3cmd to list the ACLs of all the files in an S3 bucket

In the absence of any replies on the S3cmd forum, I managed to use a command-line hack to get the ACL status of all files in a bucket:

I tried parsing the whole bucket with an asterisk as an option, but that didn't work with version 1.0.1 or 1.0.5 of S3cmd.

s3cmd -c ~/.s3cfg_uk2 ls s3://test-hubs/ | awk '{print $4}' | sed 's/s3\:\/\/test-hubs\///g' | xargs -I file s3cmd -c ~/.s3cfg_uk2 info s3://test-hubs/file 

This is all on one line, and the xargs option is a capital "i" and not an "el" as it appears here ;-) If anyone can see how to refactor this to make it more efficient, be my guest, but it works.

Or indeed answer my original question with a snappy command line option to s3cmd ;-)

If there is "anyone" access to a file, you will see "anon" as the ACL setting, so you can search on that if you want to look for globally available files - which is what I wanted to do.
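To sketch that last search stage: `s3cmd info` prints the object path followed by its ACL lines, so a small awk filter over that output prints just the world-readable files. The sample output below is a hypothetical stand-in - in real use you would feed in the output of the info commands from the pipeline above:

```shell
# Remember the most recent object path; print it whenever an "anon"
# ACL line appears under it.
printf 's3://test-hubs/public.txt\n   ACL: *anon*: READ\ns3://test-hubs/private.txt\n   ACL: owner: FULL_CONTROL\n' \
  | awk '/^s3:/ {f=$1} /anon/ {print f}'
```

For the sample input, only `s3://test-hubs/public.txt` is printed, since it is the only object with an anon grant.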

Thursday, 11 June 2015

VM created by boot2docker init does not have DNS set up correctly - here's how to fix it

I came across this problem while setting up Docker in my OS X environment, and after searching the web I found a method that works well and cures the problem. Basically, once the VirtualBox VM is created, you need to edit the network adapter drivers and set them to "PCnet-FAST III" instead of the paravirtualised drivers. You need to do both adapters, and then restart boot2docker. You will notice that you now get the correct settings in your /etc/resolv.conf file. This was annoying me, as it meant I had to reset the settings in that file every time I started Docker - which is not ideal. Hope this saves you some time.
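For anyone who prefers the command line, the same adapter change can, I believe, be scripted with VBoxManage. This is a sketch, assuming the default VM name "boot2docker-vm" that boot2docker init creates, and that the "PCnet-FAST III" adapter corresponds to VirtualBox's Am79C973 nictype:

```shell
# Stop the VM, switch both adapters to PCnet-FAST III, then restart.
boot2docker stop
VBoxManage modifyvm boot2docker-vm --nictype1 Am79C973
VBoxManage modifyvm boot2docker-vm --nictype2 Am79C973
boot2docker up
```

After the restart, /etc/resolv.conf inside the VM should come up with the correct settings, as with the GUI method above.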

Friday, 3 April 2015

A little trick I found helpful when programming the DigitalOcean v2 API - creating a new droplet

While looking through several examples, I found a few good links on how to create a new virtual machine (VM) using the Ruby programming language and the DigitalOcean gem they provide.

They all, however, left out one important point: you have to specify the SSH key ID number as part of the call to create a new VM.

Without this last piece of the puzzle, you end up with a VM that is up and running, but that you can't log into with your SSH keys - not ideal ;-)

The SSH keys have to be specified as a Ruby array, as you may want more than one SSH key associated with your VMs once they have been created - so here is an example, with the missing piece inserted. The token mentioned in this code segment is the OAuth token you create when you set up your DigitalOcean account. You can set it up as an environment variable, but I have not shown how to do that here.

To start, you need to install this gem into your Ruby environment:

gem install droplet_kit


require 'droplet_kit'
client = DropletKit::Client.new(access_token: token)  # token = your OAuth token
droplet = DropletKit::Droplet.new(name: '', region: 'nyc3', size: '1gb', image: 'ubuntu-14-04-x64', ssh_keys: [1234567])
client.droplets.create(droplet)

The code is downloadable from the GitHub gist link below.

You can find the ID numbers of your SSH keys with the following bash one-liner, so you know which IDs to put into the array.

curl -X GET -H 'Content-Type: application/json' -H "Authorization: Bearer $TOKEN" "https://api.digitalocean.com/v2/account/keys" | jq "."

Where $TOKEN is your OAuth API v2 key.
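If you only want the ID numbers out of that JSON, the jq filter can be narrowed. Here a hypothetical response body stands in for the live API call:

```shell
# Extract just the ssh key ids from a (made-up) v2 API response body.
printf '{"ssh_keys":[{"id":1234567,"name":"laptop"},{"id":7654321,"name":"desktop"}]}' \
  | jq '.ssh_keys[].id'
```

For the sample body, this prints 1234567 and 7654321, one per line - ready to drop into the ssh_keys array.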

Hope this saves you time getting this up and running.

Ref blog post and GitHub gist of the code:
How to use the DigitalOcean v2 API
GitHub gist of the code

Sunday, 31 August 2014

Using Docker to run rtorrent to reduce your server's resource requirements

Let me state early and clearly: anyone who thinks Docker is a new technology knows nothing about IT and even less about virtualisation. So if some smart alec in your office starts spouting about this new technology, take them down a peg or two with a few links to Solaris containers, OpenVZ or LXC.

What the Docker team have done extremely well, is make these ancient technologies very accessible and very easy to use, with a well defined tool set and API. This they must be highly praised for.

In the words of Newton, however, they stand on the shoulders of giants who did a lot of the heavy lifting, and let us all not forget this.

I have been using containers for years, and we worked on a very successful cloud at Nokia using OpenVZ, where we built our own tools.

So I thought I would give Docker a spin on one of my cloud servers and just kick the tyres to start. I picked an application I use a lot for downloading Linux ISOs, so it seemed a good choice. It was very straightforward indeed.

The host operating system was CentOS 6.5, which I'm growing less fond of as each week passes; in the fast-moving cloud space, Ubuntu is simply better. It was installed, however, and I couldn't be bothered to change it. You need to enable the EPEL repository and install Docker with yum.

I decided to download and use the userland tools of a Docker Ubuntu image - but whatever image you choose, the host kernel is the one that will be used. This is down to the historically well-thought-out ABI built into the Linux kernel, which allows this all to work.

Once the image was retrieved from the Docker registry, I fired up a container with the Ubuntu 14.04 tools and installed screen, rtorrent, vim and htop.

I always run rtorrent inside screen so I can just leave it running and come back to it when required. Also, importantly, when starting your container use /bin/bash so you have an interactive session and can go back to check your screen/rtorrent session. A command like the following will do.

docker run -i -t my-ubuntu-image /bin/bash

Be careful, however, when you want to exit this container: DON'T type exit, but press CTRL-p, CTRL-q, so everything keeps running and you can reattach to the container when you wish to check on progress.

This uses significantly fewer resources than spinning up a KVM virtual image to do the same job, as it shares the resources of the host system that are already running.

I have deliberately not put all the commands needed to do this here, as the Docker documentation is good and very clear, so duplication is pointless.

Saturday, 19 July 2014

Industry's illogical use of agencies in IT contractor procurement - a humorous musing

I thought I would write a light-hearted look at what I genuinely consider to be the most illogical waste of companies' money: paying the middle men. These middle men take anything from 10-30% off the top of other people's work, for doing virtually nothing.

Hiring IT managers know the sort of people they need, but instead of putting an advert directly on their website or via aggregator sites like Jobserve, they employ the services of an agency. Herein lies the problem, as most agents have a strange and varied past employment history and almost exclusively know next to nothing about complex IT systems. They may have heard of Windows if you're lucky, but start talking about Linux, Java, Ruby and cloud services and they start to glaze over.

So, in an attempt to consider what might be going through an ex-RSPCA dog handler's mind when talking to an IT hiring manager, here is my stab:

1) Agile - hmm so he wants the new guy to do yoga - probably a small office

2) Puppet - ok that makes sense he can keep the kids happy with a show on "bring your kids to work day"

3) Chef - good point, saves on kitchen staff as he can cook lunch while programming

4) Java - well if he is making the lunch he might as well make the coffee as well.

5) Tomcat - lion taming is a great skill, and could work well while the kids are in

6) VMware - probably similar to Tupperware - but why he wants him to sell home goods is a bit odd

7) Solaris - well green energy is in vogue - so solar panel knowledge could be handy

8) Perl - Knowledge of jewellery is always handy - especially to please the bosses wife

9) Ruby - Boy he really is into his Jewellery, I will have to check what they do for a business.

10) Python - that is just odd, perhaps he keeps snakes in the office as pets. Hope they don't escape, could cause chaos with the Lion.

This is obviously not an exhaustive list, and I could have been a lot harsher, but I hope it makes you smile like it does me when I see job adverts for Pearl developers - do they want me to polish gems for a living?

Saturday, 28 June 2014

Is using email still relevant? Hell yes

I have been working in IT long enough to remember many changes in technology usage. I can remember just missing out on having to use punch cards to get my programs loaded into the mainframe.

So I can remember a time when people seriously did not think that email would or could be useful. I have actually worked in companies where the senior managers would get their secretaries to print out their emails and hand them over on paper. I can also vividly recall the many hours spent convincing these people of the benefits of email, and how it could save them time and money.

If you look at the world today through your 21st-century eyes, you would probably find those sentiments cute, if not funny. However, I'm beginning to see statements in my technology reading about the post-email era, and I really think this would be a bad thing for many busy people.

I know we all get fed up with the spam and unsolicited contact we receive, but with a well-set-up email configuration, and by not ticking all those "send me updates" boxes, this can be controlled fairly easily.

Now I'm not some techno-Luddite arguing from the sidelines as your knowledge gets swept away with yesterday's chip papers; I'm arguing from a very practical perspective. I have shifted to the mobile technology paradigm like everyone else, and use WhatsApp, Facebook Chat, Twitter and WeChat as well as the next man, but they have a big drawback. They are all great tools and I can see a use for them going forward, no doubt. However, on your mobile devices they demand your attention and are constant interruptions to your busy day. They are like a nagging yappy dog, constantly demanding to get in your face. Yes, you can turn them down or turn notifications off, but then you have just killed nine-tenths of their usefulness in the first place. They are either in your face, or why bother using them at all?

The absolute beauty of email is its asynchronous nature: you can choose the time and place at which you read or respond to what has arrived in your inbox. It also has thirty years of tooling development surrounding it, so finding and dealing with issues that took place a year ago is a breeze. Try doing that with WhatsApp - good luck with that.

Skype does a better job than most of recording what you have been up to in conversations, but it is still time limited. You can literally search your email threads from the day you opened your account, which has saved my bacon on more than one occasion.

I think the tricks to using email wisely are to be ruthless with your inbox, to keep a sensible structure of folders/tags for the email you wish to keep, and to constantly help your spam engine get rid of rubbish.

I'm sure email will be replaced at some point, probably with more human-friendly methods of communication like video or on-the-fly video conversation recording. The main issue there will be the extremely complex nature of search, which is not a trivial problem with video. Let's hope the world's computer scientists are on the case as I type ;-)