Smoothwall Firewall project

Saturday, 9 March 2019

Staying safe on the Web - what can I do to make my browsing more secure and leak less data?


Friends and family often ask me about technologies they can use to make their lives just that little bit safer on the Web. So, to save me having to answer the same questions repeatedly, I thought I would write a blog post highlighting the tools, apps and extensions I use to make things better than just connecting to the web and hoping for the best.
  • Use a VPN whenever and wherever you are. There are so many good and inexpensive options these days that there is really no excuse not to. I hear good things about ExpressVPN. I use my own, but that should be a good one. A VPN will work with your PC, iPad and mobile phone, so you will be covered whenever you decide to get some free internet in a cafe you have never been to before :-)
  • Use Mozilla Firefox, Safari or Opera as your main browser. I know Google Chrome offers many features, but can you honestly trust Google not to be constantly looking to take your data and use it? I certainly don't trust Chrome anymore.
  • Install a good set of extensions to stop trackers and unwanted information leaking:
    • Adblock Plus
    • Ghostery
    • uBlock Origin
    • DuckDuckGo privacy essentials
    • Privacy Badger
    • HTTPS Everywhere
  • Don't use Google as your default search engine - switch to DuckDuckGo. It is an option on all modern browsers - just change the default. You will be amazed at how the targeted ads suddenly stop appearing everywhere, because you will have stopped Google building a complete profile of you on the web.
  • Use a password manager to ensure strong passwords on all sites you use. Two good examples are LastPass and 1Password.
  • Use 2FA (two-factor authentication) wherever sites allow it. Not all sites support it, but check, and enable it where you can.
  • If you must use Facebook - I recommend you don't - then install an extension that isolates it in a sandboxed container. This will reduce the amount of data you leak to that app.
  • Always use an anti-virus product - there are many to choose from - I use AVG.
  • Make regular backups, so that if your machine does get hijacked you have always got access to your valuable files.
There are other add-ons that block JavaScript, which can stop a lot of nasty attacks; however, JavaScript can make a huge difference to the way the web looks and feels, and a lot of sites depend on it. So unless you know what you are doing, I would steer clear of that to start with.

Updated: 15/7/2019 
 

Saturday, 4 November 2017

DNS resolution in Docker containers with Ubuntu Artful on AWS

This post is a solution to a problem I discovered, so I hope others will find it useful.
Spinning up AWS Ubuntu Zesty (17.04) images with Docker installed was straightforward with Ansible and Terraform, but then Ubuntu Artful (17.10) arrived, and the containers it spun up could not resolve DNS, regardless of which version of Docker I installed.
After a lot of testing, it appeared to me that the host computer was passing the wrong DNS server entry into resolv.conf within the container, so it would never work.
The Solution:
With systemd and docker, the preferred way to change a daemon setting is to create a new file in /etc/docker called daemon.json.
In that file, add the following to get it to use the AWS VPC default DNS resolver - 10.0.0.2 - like so:
{
   "dns": ["10.0.0.2"]
}
Restart the Docker daemon, and the containers can now resolve DNS. There may be other ways to resolve this issue, but this works perfectly and uses the methods preferred by the Docker community.
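For reference, here is a minimal end-to-end sketch of those steps on a systemd host - it assumes your VPC's resolver really is at 10.0.0.2 (AWS puts it at the VPC base address plus two), and uses ubuntu:17.10 purely as an example image:

sudo mkdir -p /etc/docker
echo '{ "dns": ["10.0.0.2"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
# containers should now resolve names again
docker run --rm ubuntu:17.10 getent hosts archive.ubuntu.com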
I hope this helps others who may run into this problem.
Other settings that can be made in that file can be found in the dockerd settings documentation.

Tuesday, 6 June 2017

Getting apps to work in cutting-edge Ubuntu Docker containers when they grizzle about locales.




Whilst running up an Artful Aardvark Ubuntu Docker container on a Zesty Ubuntu host server, I received a message that it couldn't load certain apps because the UTF-8 locales were not configured.

This was annoying, but I initially worked around it by installing the locales-all package into the container. That worked, but it bloated the container considerably.

There is a better and simpler way, which I found after a lot of digging around in the Docker documentation, as others must have hit this issue before.

What you need to do is set the following environment variable in your Dockerfile when you build your container, and the problem is solved for most apps, like tmux and screen, within the container.

ENV LANG C.UTF-8
If you find that this doesn't cure it for your application, you may need to move to the next step and include the following in your Dockerfile.

       
RUN apt-get update && apt-get install -y locales \
    && rm -rf /var/lib/apt/lists/* \
    && localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8
ENV LANG en_US.utf8
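To check the result, you can build a throwaway image and ask it for its locale settings - the image tag below is just an illustrative name:

docker build -t locale-test .
docker run --rm locale-test locale
# LANG should report C.UTF-8 or en_US.utf8, depending on which fix you used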
       
 

Friday, 12 May 2017

Getting an Elasticsearch shard to relocate after a cluster node's disk had become full



I came into work today to find that one of the test environment's Elasticsearch (ES) nodes had run out of disk space. A bad job had gone berserk overnight and filled the logs, which then filled the ES nodes' disks.

So what to do with a volume at 100% utilisation? After chatting to a few colleagues, it was decided to delete one of the earlier indices, as the data in the dev environment was not life or death.

I first tried this from the GUI, which didn't work at all, so I switched to the CLI and issued the following:

curl -XDELETE localhost:9200/mydodgylogs-2017.05.08

That did the trick, and we were back to 80% disk utilisation, so I was expecting the shards to sort themselves out calmly. Unfortunately, no go - one shard was still refusing to relocate, and it was effectively marking the cluster as yellow. I had another chat with my colleague, and he told me about a recovery log file that could cause this. Effectively, the disk being full had left the shard in an unstable state, and it needed some help to sort itself out.

So, on the CLI again, I found the offending shard directory in the correct index and removed the following file:

 mydodgylogs-2017.05.08/4/translog/translog-1234567890.recovering
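If you are not sure where the shard data lives on disk, something like this can locate any stray recovery files - the path assumes a default package install, so adjust it to your data directory:

find /var/lib/elasticsearch -name '*.recovering'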

As soon as that was removed, the shard relocated fine and the system went back to being happy and green.

As it took some time with a few colleagues to get to the bottom of the problem, I thought others may find it useful in the future.

Running the normal health check command then gave me the following healthy output.

curl localhost:9200/_cluster/health?pretty

{
  "cluster_name" : "myclustername",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 45,
  "active_shards" : 90,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0

}


Wednesday, 26 August 2015

Using S3cmd to list the ACLs of all the files in an S3 bucket


In the absence of any replies from the S3cmd forum, I managed to use a command-line hack to get the ACL status of all files in a bucket:

I tried parsing the whole bucket with an asterisk as an option, but that didn't work with version 1.0.1 or 1.0.5 of S3cmd.

s3cmd -c ~/.s3cfg_uk2 ls s3://test-hubs/ | awk '{print $4}' | sed 's/s3\:\/\/test-hubs\///g' | xargs -I file s3cmd -c ~/.s3cfg_uk2 info s3://test-hubs/file 

This is all on one line - and the xargs option is a capital "i" and not an "el" as it appears here ;-) If anyone can see how to refactor this to make it more efficient, be my guest, but it works.

Or indeed answer my original question with a snappy command line option to s3cmd ;-)
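For what it's worth, here is one possible refactor, untested beyond the versions above: since the fourth column of "s3cmd ls" is already the full s3:// URL, the sed and the xargs template can be dropped.

s3cmd -c ~/.s3cfg_uk2 ls s3://test-hubs/ | awk '{print $4}' | xargs -n1 s3cmd -c ~/.s3cfg_uk2 info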


If a file has "anyone" access, you will see "anon" as the ACL setting, so you can search on that if you want to look for globally available files - which is what I wanted to do.
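For example, to surface just the object names and any anonymous grants, you could filter the output with grep - treat this as a sketch, as the exact output format may vary between s3cmd versions:

s3cmd -c ~/.s3cfg_uk2 ls s3://test-hubs/ | awk '{print $4}' | xargs -n1 s3cmd -c ~/.s3cfg_uk2 info | grep -E '^s3://|anon'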

Thursday, 11 June 2015

VM created by boot2docker init does not have DNS set up correctly - here's how to fix it

I came across this problem while setting up Docker on my OSX environment, and after searching the web I found a method that works well and cures the problem. Basically, once the VirtualBox VM is created, you need to edit the network adapters and set them to "PCnet-FAST III" instead of the paravirtualised drivers. You need to do both adapters, and then restart boot2docker. You will notice that you now get the correct settings in your /etc/resolv.conf file. This had been annoying me, as it meant I had to reset the settings in that file every time I started Docker - which is not ideal. Hope this saves you some time.
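If you prefer the command line, the same change can be made with VBoxManage - this sketch assumes the default VM name of boot2docker-vm, and Am79C973 is VirtualBox's identifier for the PCnet-FAST III adapter:

boot2docker stop
VBoxManage modifyvm boot2docker-vm --nictype1 Am79C973 --nictype2 Am79C973
boot2docker start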

Friday, 3 April 2015

Little trick I found helpful programming the DigitalOcean v2 API - create a new droplet



While looking through several examples, I found a few good links on how to create a new virtual machine (VM) using the Ruby programming language and the DigitalOcean gem they provide.

They all, however, left out one important point: you have to specify the SSH key ID number as part of the call to create a new VM.

Without this last extra bit of the puzzle, you end up with a VM that is up and running, but that you can't log into with your SSH keys - not ideal ;-)

The SSH keys have to be specified as a Ruby array, as you could want more than one SSH key associated with your VMs once they have been created - so here is an example, with the missing piece inserted. The token mentioned in this code segment is the OAuth token that you create when you set up your DigitalOcean account. You can set it up as an environment variable, but I have not shown how to do that here.

To start, you need to install this gem into your Ruby environment:

gem install droplet_kit


#!/usr/bin/ruby

require 'droplet_kit'

# Your DigitalOcean OAuth token, read from an environment variable
token = ENV['Oauth_key']
client = DropletKit::Client.new(access_token: token)

# ssh_keys takes an array of key IDs - replace 1234567 with your own key's ID
droplet = DropletKit::Droplet.new(name: 'example.com', region: 'nyc3', size: '1gb', image: 'ubuntu-14-04-x64', ssh_keys: [1234567])
client.droplets.create(droplet)

The code is downloadable from the GitHub gist link below.

You can find out the ID numbers of your SSH keys with the following command, so that you know which IDs to put into the array.


curl -X GET -H 'Content-Type: application/json' -H "Authorization: Bearer $TOKEN" "https://api.digitalocean.com/v2/account/keys" | jq "."

Where $TOKEN is your OAuth API v2 token.
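If you only want the IDs and names rather than the full JSON, a jq filter will trim the output down - the field names follow the documented v2 response shape:

curl -s -X GET -H 'Content-Type: application/json' -H "Authorization: Bearer $TOKEN" "https://api.digitalocean.com/v2/account/keys" | jq '.ssh_keys[] | {id, name}'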

Hope this saves you time getting this up and running.

Ref: blog post and GitHub Gist of the code:
How to use the DigitalOcean v2 API
GitHub Gist of the code