Smoothwall Firewall project

Thursday 25 January 2024

How to simply build and maintain your own VPN service for complete privacy

One of the best ways of staying safe on the web is to use a VPN (Virtual Private Network), which keeps all your traffic across the internet encrypted and, more importantly, away from prying eyes.

Now the easiest way for most people to do this is via a paid-for service, and for 95% of the population that's perfectly fine. However, for computer professionals, the fact that you are using someone else's service leaves the worry that logs could be kept of where you visited and when (even though providers claim they don't keep them). This is unacceptable to me.

So I have built my own service using these basic tools.

  1. Amazon EC2 service
  2. Wireguard kernel VPN service docker container
  3. Ubuntu 22.04 (soon to be 24.04) operating system built on an Amazon AMI virtual instance
  4. Terraform to build the whole service from scratch
  5. Ansible to configure the virtual instance and keep its state consistent.
  6. Docker and Docker Compose 

Now there is a lot of code tucked away in my private GitLab instance for this, but I will describe how the process works and how the system gets built from scratch.

Basically, the Terraform code builds the infrastructure and the Ubuntu virtual machine (VM) that the VPN will run on. Once this code has run, it will have created everything - including a static IPv4 and IPv6 address that the VPN can use.

Once the VM is up and running, and I can successfully log onto the machine with SSH, I then run the Ansible code to build all the required parts of the system and upload the Docker Compose configuration, ready for building the docker container that will be the VPN server endpoint.
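The compose file that gets uploaded is along these lines - a minimal sketch only, assuming the linuxserver.io WireGuard image (the PEERS count, timezone and paths here are illustrative, not my real values):

```shell
# Sketch: write a minimal docker-compose.yml for a WireGuard server
# container. The linuxserver/wireguard image and the values below are
# illustrative assumptions - adjust PEERS, TZ and paths to taste.
mkdir -p ~/wireguard
cat > ~/wireguard/docker-compose.yml <<'EOF'
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    container_name: wireguard
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - PEERS=3                # number of client configs to generate
    volumes:
      - ./config:/config
    ports:
      - 51820:51820/udp        # WireGuard's default UDP port
    restart: unless-stopped
EOF
```

A `docker compose up -d` in that directory then brings the endpoint up.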

Once that has all been completed and the docker container is up and running, I test that the new container is accessible from my local laptop using netcat - via the AWS security groups that were built with Terraform.

The initialisation of the Wireguard docker container creates several user configuration files that are required by any client application that wishes to connect to this VPN. 
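For illustration, a generated peer file has roughly this shape (the keys, addresses and endpoint below are placeholders, not values from my setup):

```shell
# Illustrative only: the rough shape of a generated WireGuard peer
# config. Keys, addresses and the endpoint are placeholders.
cat > /tmp/peer1.conf.example <<'EOF'
[Interface]
Address = 10.13.13.2/32
PrivateKey = <peer1-private-key>
DNS = 10.13.13.1

[Peer]
PublicKey = <server-public-key>
Endpoint = <server-static-ip>:51820
AllowedIPs = 0.0.0.0/0, ::/0
EOF
```

The client imports a file like this (or scans it as a QR code) and the tunnel is up.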

With the WireGuard client installed on any laptop or desktop, you can now connect to the VPN from anywhere in the world, in whichever region you created the VM. This allows for great flexibility and availability.

Also, as Docker containers are so efficient and small in size, the VM can be used for many other tasks using other docker containers, like OpenVPN, Transmission, GitLab etc.

I have been using this method for many years now and it is rock solid and extremely reliable. 

This supports a small team of users, but could be easily scaled to support many more users, and multiple instances would provide resilience.

Another major advantage is that this can all be run on an AWS micro instance, so it is inexpensive. Though slightly more expensive than just using a paid-for service, you have 100% control: you know all logs can be deleted on a daily basis, and the VM can be destroyed and rebuilt in minutes in another region or availability zone, or for any other reason.

With the way the Internet is going, and every company wishing to snoop on your work and activities, the more you can keep things private, the better.

Friday 10 February 2023

Things discovered while installing pi-hole service as a docker container on a Raspberry Pi 8GB Pi4 (Raspbian Buster OS)

Using the docker compose file from the GitHub installation site, it was obvious that this version of the OS was not compatible with the latest versions of Pi-hole - so the first task was to upgrade the OS to Bullseye.

The Pi-hole docker container install site is here.

This website offers a great simple method. It’s not as seamless as Ubuntu - but also not rocket science either.

The first problem was that a service was already sitting on port 53 - so it had to be disabled, as Pi-hole uses this port for its service. The eventual truth was that there were actually two services fighting over this port - systemd-resolved and connman's dnsproxy.
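`ss` will show you who owns the port. The pipeline below runs against a captured sample of the sort of output I saw (PIDs and addresses will differ; on the Pi you would pipe `sudo ss -ulpn` in directly):

```shell
# Find processes bound to UDP port 53. The sample here is captured
# output so the pipeline can be demonstrated standalone; on the Pi,
# replace the printf with: sudo ss -ulpn
sample='UNCONN 0 0 127.0.0.53%lo:53 0.0.0.0:* users:(("systemd-resolve",pid=536,fd=12))
UNCONN 0 0 127.0.0.1:53 0.0.0.0:* users:(("connmand",pid=412,fd=9))'
printf '%s\n' "$sample" | awk '$4 ~ /:53$/ {print $4, $6}'
```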

They both need separate solutions to stop them.  

systemd-resolved needs to have its config file (/etc/systemd/resolved.conf) altered as follows:
set DNS= and DNSStubListener=no - then restart the service.
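Those two edits can be made non-interactively with sed. The sketch below works on a scratch copy so it is safe to try anywhere; on the Pi you would point it at /etc/systemd/resolved.conf with sudo, then restart the service:

```shell
# Demonstrated on a scratch copy of resolved.conf; on the real machine
# run the sed against /etc/systemd/resolved.conf (with sudo), then:
#   sudo systemctl restart systemd-resolved
cat > /tmp/resolved.conf <<'EOF'
[Resolve]
#DNS=
#DNSStubListener=yes
EOF
sed -i -e 's/^#\{0,1\}DNS=.*/DNS=/' \
       -e 's/^#\{0,1\}DNSStubListener=.*/DNSStubListener=no/' /tmp/resolved.conf
```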

With connman, you need to follow the instructions in this blog post to turn off its attempt to grab that port.

Once this was completed - the docker container would still not start - it was giving a spurious error message about IPv6 issues.

On further reading around, it appeared that Docker needs to be version 20 or above, so I updated it in Bullseye as follows using:

sudo apt install
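A small guard like this in a setup script would have caught the problem earlier - a sketch assuming Docker's usual version-string format ("Docker version X.Y.Z, build ..."); the sample string is hard-coded here so it runs anywhere, but on a real machine you would feed it `$(docker --version)`:

```shell
# Extract the major version from a `docker --version` string and warn
# below 20 (the version at which the container started working for me).
# The sample string is hard-coded for demonstration; on a real machine
# use: docker_major "$(docker --version)"
docker_major() {
  printf '%s\n' "$1" | sed -n 's/^Docker version \([0-9][0-9]*\)\..*/\1/p'
}
ver=$(docker_major "Docker version 24.0.7, build afdd53b")
if [ "$ver" -ge 20 ]; then
  echo "docker $ver is new enough"
else
  echo "docker $ver is too old - upgrade first"
fi
```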

After all this, the docker container started correctly, and the Raspberry Pi is now functioning as our DNS proxy for the whole house network - to try and stop all the crap the web throws at you.

Thursday 23 April 2020

Switching on DNS over HTTPS on various browsers

A new feature that is hitting all modern web browsers is the ability to turn on DNS over HTTPS, which in my opinion is a very good idea - it keeps the ISP snoopers off your traffic, so they have no idea what you are looking up or searching for. Not all browsers have this facility at the moment, but I will cover those that do.

This by association is a recommendation for those that do.

Mozilla Firefox:

To turn this feature on in Firefox, go to Preferences/Network/Settings and then select DNS over HTTPS as shown below.

Google Chrome / Chromium:

To turn this feature on in Chrome or Chromium, open a new tab, type chrome://flags, then type dns in the search bar and press enter. The following will appear - just select Enabled for DNS over HTTPS.


Opera:

As this browser is based on Chromium, you have to type opera://flags and then follow the above.

Microsoft Edge:

Again, this browser is based on Chromium, only this time just type edge://flags and then follow the above.

Apple Safari and Microsoft Internet Explorer:

I'm afraid at the time of writing the above two don't support it. Safari may well in the future, but I very much doubt Internet Explorer ever will.

Wednesday 15 April 2020

Switching to a small footprint Intel NUC computer to do everything - unexpectedly surprised at performance

I don't know about you, but I have a variety of computer systems in my house for a whole range of uses. Everything from a Mac mini to an Amazon Firestick, all doing their job for the task required. However the Mac minis I have serve as media servers and players but are getting a bit long in the tooth, so I decided to upgrade the media server with an Intel NUC.

I decided to spec it as fully as possible and gave it a 6 Core processor, 32 GB RAM, and a 1TB m.2 NVME SSD drive. This should make it future proof for a good few years, that was the thinking. It came in at around £700, which was way less than a new Mac Mini.

Its main purpose in life was to support 4 USB 3.1 Gen2 external hard drive boxes (Akitio) for all my media and backups. If it was capable of anything else that would be a bonus, but not expected. It is attached to a 32" Samsung curved monitor with 144Hz refresh at 2.5+K - crystal clear and super responsive.

I installed Xubuntu as the base operating system, though I only use openbox window manager on it to reduce the overhead of the host operating system even further. That has been a great learning curve to show how little GUI you actually need just to get stuff done.

However, the real surprise when I spun everything up was just how fast this little box is. The memory wasn't being used by the media software (Plex), so I thought: let's try running a few docker containers on here as well to do other network jobs for me. No issue at all. I currently have the following running on the box 24/7:
  1. Pi-hole DNS service
  2. Portainer container manager service
  3. Jenkins job management service
  4. OpenVPN service

So, I thought, I wonder how it would perform if I stuck a few virtual machines on there as well whilst it's doing everything else.

No problem at all, I'm currently running a beta version of Ubuntu and a separate Arch Linux using KVM and QEMU, and it still is not scratching the sides of what this little box can do. It's currently using 8GB of RAM and the load on the server is never above 3, even while everything is running and I'm streaming HD content to other parts of the house.

Considering I used to have tower machines cluttering up my workspace to do this sort of thing, I now have one device to do it all.

Intel have just brought out a new edition of these, with even faster processors and RAM capacity, so I will be getting one of those to work alongside this one when my other Mac Mini dies.

The Mac minis served me well, but I have now found a better device, and with all the cost savings of not buying Apple kit again, I can literally have three for the price of one.

30/01/2022 Update

I have added several more docker containers to the machine to see just how far this can be pushed:

  1. Gitlab for source code control
  2. Plex for home media and music playback

As before, with all these services running, pushing the device with streaming and carrying out all its other functions, it is still performing perfectly. The replacement of the installed Plex server with a docker container is simply one of ease of maintenance. There is nothing wrong with the installed app version, but keeping Plex up to date is now just a matter of a "docker pull" and I've got the latest version - no library issues etc.

Monday 13 April 2020

Building a small footprint Ubuntu desktop or server for old, single-board or virtual machines.

So the lockdown offers time to try out things I have put on the back burner for a while. This little project was to build as easily as possible a diminutive Linux install that can be used for many use cases, like single board computers, virtual machines and my older hardware that I use for various tasks. Also offering complete control over what you do and don't install.

I have tried all sorts of Linux distributions, but I think I have found the ideal solution with this one.

Starting with the Ubuntu mini iso makes the starting point very easy. You can install as much or as little as you like as you go through the installation process. Burn the iso file onto a USB drive, or use it directly for your virtual machines. I basically didn't install anything I didn't need to - especially towards the end when it asks about GUI desktops - select nothing.

One thing to look out for when partitioning the disk - whether virtual, SD or SSD - is not to set up a swapfile. It only allocates 500MB anyway, and on a device with 4GB of RAM - which is pretty common these days - you don't need it.

Once all is installed you are presented with a standard command line when you reboot - which can be enough for a lot of people if you are going to run this as a server for some purpose. That takes up around 1.5GB disk space. This could be pruned further if needed, but even with a 16GB SD card, that's not too shabby. Especially compared to a full Gnome Ubuntu install which will eat around 6.5GB.

Now to get a simple working desktop on top of that I recommend using openbox - the following command installs all you need to get going, and give you the desktop above - minus the wallpapers - more of that in a mo.

sudo apt install openbox obconf obmenu vim xterm lightdm lightdm-gtk-greeter tint2 nitrogen ncdu xfce4-terminal arandr
The above is one command on one line.

Reboot your machine and you will be greeted with a login screen - login with the user you set up and you will be presented with a blank screen and a cursor - that is Openbox's starting place - immediately right-click the mouse and select the terminal.

Then carry out the following:
  1. Launch tint2 to give you a panel
  2. Launch arandr to set your video resolution - and save it to a file name to be used later.
  3. Copy any wallpaper from any machine or website using ssh to your users home directory
  4. Launch nitrogen to set that wallpaper you just saved. You can install more later.
  5. Make these changes permanent.
To make option 5 happen:

Create a folder in /home/your username/.config called openbox.
In that directory create a file called autostart.

Add these lines to that file

nitrogen --restore &
tint2 &
/home/your username/.screenlayout/name-you-saved-it-as &
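Putting those pieces together, the whole of option 5 can be scripted like this (the screen-layout filename is a placeholder - substitute the name you saved from arandr):

```shell
# Create the Openbox autostart file described above. The screen-layout
# filename is a placeholder - use the name you saved from arandr.
mkdir -p "$HOME/.config/openbox"
cat > "$HOME/.config/openbox/autostart" <<'EOF'
nitrogen --restore &
tint2 &
$HOME/.screenlayout/my-layout.sh &
EOF
```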

Once you have done this - you are good to go. Log out and back in, and you will have something similar to the above image.

Now with the Ubuntu eco-system, you can install anything you like. This can be a Bastion, NFS, Samba, DNS server - whatever.

If you want to make it into a full-function desktop, add Firefox, VLC, Spotify, etc, etc.

However, the base from which you now start is 2.4GB of disk space used, which is the key to this.

You now have complete control over whatever you want to install and make this device into something you have designed and like.

It also minimises your security attack vector - as you have a lot less installed, less to update and less to keep an eye on. This is a massive plus for the whole process.

Updated: 14/4/2020

Saturday 28 March 2020

Getting DNS working with an Ubuntu 20.04 virtual machine installed on an Ubuntu 19.10 host

While taking a look at the next LTS release of Ubuntu - I found after I had spun up the new image in KVM on an Ubuntu 19.10 host that the DNS would not resolve - which scuppered me taking a really good look at it.

Now, DNS resolution has been handled by systemd for a while, and on the host machine this has not caused me an issue. I have to say, though, it appears to me that using systemd to resolve DNS is not only overly complicated but a waste of everyone's time - but I'm sure someone must appreciate the value of it.

I tested that the network was working correctly and the virtual machine could access the DNS resolver if it had been configured correctly with the following command:

dig @

This worked, so I knew that the virtual machine would work if the DNS resolver was configured correctly.

So, how to fix the issue? I tried several methods - each trying not to disable the systemd service - but all met with failure in my testing, so in the end I decided to just turn it off and use the tried and trusted /etc/resolv.conf.

The commands to achieve this are:

systemctl stop systemd-resolved
systemctl disable systemd-resolved

Then edit the following line in the file 

Then remove the link in /etc:

rm resolv.conf

Now create a new resolv.conf in /etc with the name of the nameserver you wish to use, i.e.

nameserver < or whatever yours is >

Then you need to restart NetworkManager:

sudo systemctl restart NetworkManager
This worked perfectly, and the virtual machine is now happily resolving DNS correctly.
None of the above is destructive, and it can all be reversed if the systemd service could ever be made to work; but as this was only a test machine, I decided I had wasted enough time on it.

NB. While working on some more virtual machines, I came across this blog post, which offers a more elegant solution to this DNS problem for Ubuntu/Debian-based distros. It allows you to keep DNS resolution in systemd - so I can't have been the only person having issues with it.

Solve local DNS issues in Ubuntu and Debian

Wednesday 13 November 2019

How to use pi-hole with a Docker container on your Mac Laptop to stop unwanted internet adverts.

If you are fed up with pointless internet advertising on the sites you visit, here is a great additional service you can install on your local machine - or, more importantly, for your network - to stop it dead. I shall not go through what this product is, as here is a link - Pi-hole.

Basically, you need to install the Docker application on your laptop or desktop so that running up the pi-hole docker container is straightforward. You can get Docker for Mac here. For a network installation, a Linux server virtual machine or a docker container on a continuously running machine would make sense.

Then you need to clone the pi-hole docker git repository to your local machine

Change into that directory and run 

Once the script has run - it will spit out an admin password that you will need to remember to log into the web-based admin screen.

You can look at that by pointing a browser tab at

Once logged in you will see something like this.

The last part of getting this working on your laptop is to point the DNS resolution of macOS at localhost, as pi-hole is now listening on port 53. Again, for a network-based installation you would point this at the IP address of the server running the service. You can also then set that IP address in your router's DHCP settings, so any machine on your LAN will get the same protection, as they will push all their DNS traffic through the new DNS server.

You can then run a test from the command line to make sure all your DNS requests are going via your new DNS service like so:


;; Query time: 23 msec
;; WHEN: Wed Nov 13 11:16:40 GMT 2019
;; MSG SIZE  rcvd: 139

You will notice the response comes from your machine's localhost - so all is working. With the settings of pi-hole, you can specify several upstream DNS resolvers, which also keeps your DNS queries out of the clutches of Google. There are many options - but I tend to use OpenDNS and