Custom docker image in AWS ECR used in GitHub Actions

Posted on 20 June 2023 – 3 minute read

Running a test suite in your CI pipeline is critical, but I was recently tasked with getting a test suite running without the luxury of database factories or seeders, for a variety of reasons. The approach I decided on instead was to pre-seed a database with test data and bake it into a custom docker image.

This particular project uses MySQL 8.0 for the database and AWS ECR for the container registry.

The image uses a modified MySQL base image. The default base image declares a VOLUME where all of the database data is stored. Under normal circumstances this is definitely the desired approach, as it enables persistent data; however, volumes are by their very nature mapped to locations outside of the container, so when committing any changes to an image, pre-populated database data is ignored. To overcome this, a custom Dockerfile was created.
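A minimal sketch of such a Dockerfile — the image tag and data directory path here are assumptions, not the exact ones from this project — is to point mysqld at a datadir that sits outside the declared VOLUME, so seeded rows end up in the image layers rather than being discarded:

```dockerfile
# Sketch only: image tag and datadir path are illustrative assumptions.
FROM mysql:8.0

# Store data outside the VOLUME declared by the base image so that
# seeded data is baked into the image rather than thrown away on commit.
RUN mkdir -p /var/lib/mysql-seeded && chown -R mysql:mysql /var/lib/mysql-seeded

CMD ["mysqld", "--datadir=/var/lib/mysql-seeded"]
```

After starting a container from this image and loading the test data, a `docker commit` captures the seeded state, ready to push to ECR.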

Moving from GitLab pipeline to GitHub Actions for CI/CD

Posted on 26 August 2022 – 6 minute read

I've been using my self-hosted GitLab instance for my git repos and pipelines for 5 or so years now, but things have changed and I've decided I no longer want to run my self-hosted instance any more. It's purely for hobby/personal projects and only accessed by myself. One bonus of self-hosting, however, was that I also ran a dockerised GitLab runner that performed all of my CI/CD duties, including the deployment stage, which is the important part. As this ran on a separate droplet within my VPC on DigitalOcean, connections to my web droplet were made via an internal IP address over an SSH connection (SSH is blocked to all but my home VPN IP addresses). I didn't want to open up the SSH port to world-and-dog.

Although I'm not running a massive amount of pipeline tasks, GitLab's free tier only offers 400 minutes per month for running CI/CD pipelines. GitHub, by comparison, offers 2,000 minutes per month, and as I'm not doing this as a business, I've decided to move over to GitHub.
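As a rough sketch of what the GitLab pipeline becomes on the GitHub side (workflow name, branch and test command here are hypothetical placeholders, not my actual config):

```yaml
# .github/workflows/ci.yml — illustrative sketch only
name: CI
on:
  push:
    branches: [main]
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run test suite
        run: vendor/bin/phpunit   # placeholder for your real test command
```

The deployment stage then becomes another job in the same file, with credentials held in repository secrets rather than on a self-hosted runner.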

Integrating HaveIBeenPwned into Laravel Fortify

Posted on 29 April 2021 – 3 minute read

The HaveIBeenPwned service provided by Troy Hunt contains a whole trove of breach information. It enables you to look up a single email address, a whole domain, or whether a password has been seen in a data breach, for example. It's the latter that we're interested in for this feature. Let's implement this using the icawebdesign/hibp-php framework-agnostic composer package. Install this with…

composer require icawebdesign/hibp-php

HIBP uses a k-anonymity model, meaning that when you request to see if a password is contained in a breach, you don't actually send the plaintext password across the internet, but a subset of a hashed version of it, making it secure.
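To illustrate the k-anonymity model (shown in Python here purely as a language-neutral illustration — the hibp-php package wraps all of this for you), only the first five characters of the password's SHA-1 hash are ever sent to the API; the API responds with every matching suffix and its breach count, and the comparison happens locally:

```python
import hashlib

def split_password_hash(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash into the 5-char prefix that is
    sent to the HIBP range API and the suffix that stays local."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, api_response: str) -> int:
    """Scan the 'SUFFIX:COUNT' lines returned by
    https://api.pwnedpasswords.com/range/{prefix} for our suffix."""
    for line in api_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

# Only `prefix` would ever cross the network; the plaintext password
# and the hash suffix never leave the machine.
prefix, suffix = split_password_hash("password")
```

Because hundreds of hashes share any given five-character prefix, the server can't tell which one you were actually checking.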

Laravel, when using the Jetstream or Breeze starter kits, uses Fortify under the hood. Fortify also provides a useful authentication system to implement yourself if you're not using one of the starter kits. It allows creating new users, logging in users and updating users, amongst other things. The part we want to focus on, which integrates into all three of these areas in a single place, is the password validation rules section.
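As a sketch of where this plugs in — the trait below is the typical Jetstream scaffold, but the `hibpNotBreached()` helper is a hypothetical wrapper around the hibp-php package, named here only to show the shape of the integration:

```php
<?php
// app/Actions/Fortify/PasswordValidationRules.php (typical Jetstream scaffold).
// hibpNotBreached() is a HYPOTHETICAL helper wrapping hibp-php.

use Laravel\Fortify\Rules\Password;

trait PasswordValidationRules
{
    protected function passwordRules(): array
    {
        return [
            'required',
            'string',
            new Password,
            'confirmed',
            // Closure-based rule: reject the password if it has
            // appeared in a known breach.
            function ($attribute, $value, $fail) {
                if ($this->hibpNotBreached($value) === false) {
                    $fail('This password has appeared in a data breach.');
                }
            },
        ];
    }
}
```

Because Fortify's create, reset and update actions all share this trait, the breach check applies everywhere in one go.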

PhpStorm cannot find autoload.php in Docker container under WSL2

Posted on 29 December 2020 – 2 minute read

Docker for Windows has recently improved by utilising the WSL2 backend rather than Hyper-V. The performance gains with this are big, and seeing as I use Windows as my main dev environment, I wanted to utilise this myself so made the swap.

I'm primarily a PHP developer, and my IDE of choice is PhpStorm. This in itself works fine, but I encountered a problem when trying to run my PHPUnit tests. The problem is that the autoload.php file doesn't get mounted into or mapped to the PHPUnit container, meaning that the test suite always failed.

This isn't an issue when developing a site where you likely have a PHP container with everything combined (including PHPUnit); in that case, autoload.php is already mounted along with the vendor dependencies and everything works as expected. But when you're trying to develop a package, for example, where you don't have a complete Docker Compose environment up and running, this causes a problem... it also led me to a workaround solution.

You can create a lean docker-compose.yml file defining just a PHP container.
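Something along these lines (the image tag and paths are illustrative assumptions) — the key point is mounting the project root, vendor/ included, so the container PhpStorm uses as its CLI interpreter can see autoload.php:

```yaml
# docker-compose.yml — illustrative minimal example
version: "3.8"
services:
  php:
    image: php:7.4-cli
    working_dir: /app
    volumes:
      - .:/app
```

Point PhpStorm's remote CLI interpreter at this `php` service and the PHPUnit runs pick up /app/vendor/autoload.php as expected.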

PHP, Docker, Xdebug and the missing local variables

Posted on 21 October 2018 – 2 minute read

I've made a switch in the last 12 months to using Docker as my local development environment for my PHP applications. One of the greatest modules for PHP, IMHO, is Xdebug. Many moons ago, I used to frantically hack in 'var_dump(...); exit;' calls to see what a particular variable, object or array contained. This is "OK" and it will do just that, but a step debugger like Xdebug shows you a much wider picture and enables you to trace into other parts of the code with ease. The other massive advantage is there's zero chance of missing a var_dump() and committing it to your repo for deployment!

I'm not sure if this is a PHP 7.2 thing (I suspect it is?)... but when putting breakpoints into the code (I use PhpStorm as my preferred IDE), I would see everything except local variables. I'd see all of the globals and the params injected into methods, but no locally initialised variables. This happens when running under a 'PHP Web Page' configuration in PhpStorm. When writing tests (we are writing tests, right? =P) I could put breakpoints in either the code or the test itself and all variables would be visible. I'd been banging my head against my desk for a few days over this issue. I'd scoured the 'net and read multiple posts on Stack Overflow and the like from others encountering the same problem, some with other IDEs too (such as VSCode). I'd tried just about every xdebug.* PHP setting under the sun to rectify this, to no avail.

I did some more digging about this evening and one of the things I tried, was enabling/disabling PHP modules. It didn't take long to discover the culprit doing this... PHP's OpCache module!

I'm guessing that, as part of its internal optimisation, it does $something with the variables making them not visible within the Xdebug stack. By removing the symlink to the opcache.ini module loader, things are all working as expected.
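If you'd rather disable OpCache via configuration than remove the symlink, a conf.d override along these lines (the filename is arbitrary) achieves the same while you're debugging:

```ini
; zz-disable-opcache.ini — drop into PHP's conf.d directory while debugging
opcache.enable=0
opcache.enable_cli=0
```

Remember to re-enable it afterwards; you don't want OpCache off in production.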

Hopefully this will help others experiencing the same issue that I've been having in regards to local vars not displaying.

Nginx behind Apache reverse proxy with access restrictions

Posted on 14 March 2018 – 3 minute read

I have a couple of sites that are sitting behind Basic HTTP Authentication restrictions. They're simple applications that don't need full-blown built-in access control and are served over HTTPS, so Basic HTTP Auth serves just fine as a restriction control.

However, I do like to be able to access these either from internal, or specific static IP addresses without the need to have to log in all of the time, so I configured Apache in such a way that this can be achieved, for example, within a <Directory...></Directory> block:

Order deny,allow
Deny from all
AuthName "Restricted"
AuthUserFile /path/to/.htpasswd
AuthType Basic
Require valid-user
Allow from
Satisfy Any

This will then prompt for Basic HTTP Auth credentials unless the client IP address is one of the listed IP addresses / ranges in the Allow from... list.
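For reference, on Apache 2.4 — where Order/Deny/Allow and Satisfy are deprecated in favour of mod_authz_core — the same behaviour can be sketched like this, with documentation-reserved example addresses standing in for the real ones:

```apache
<Directory "/var/www/example">
    AuthType Basic
    AuthName "Restricted"
    AuthUserFile /path/to/.htpasswd
    <RequireAny>
        Require ip 192.0.2.10 203.0.113.0/24
        Require valid-user
    </RequireAny>
</Directory>
```

`<RequireAny>` is the 2.4 equivalent of `Satisfy Any`: a matching client IP or valid credentials, either will do.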

GitLab CE, Docker, PHP7.1, Laravel 5.5, SQLite CI pipeline

Posted on 02 December 2017 – 5 minute read

I've been coding in PHP for more than 15 years using a variety of environments; Windows, FreeBSD, Linux, MacOS, VMWare+Vagrant+Linux, but more recently, I've been wanting to make the move to Docker.

I've also gone through various methods of working with code bases, from duplicating directories and incrementing version numbers, SVN and now Git, but unlike many, I don't much use GitHub for personal projects and prefer my self-hosted instance of GitLab CE. Git and GitLab have worked fine for some time, but I've recently started thinking about CI. This posed a few issues, but I managed to get it running with GitLab CI using a shell executor. This was OK, it's my own code running on a server I run and maintain myself in my home office, but what if I want to run things under different circumstances, PHP version for example? Welcome Docker =)

I started with a PHP image from phpdockerio, shelled into the container, added some modules etc, logged out of the container and selected to use my local image. Many of you already fluent in Docker will quickly realise this doesn't work! You need more than that... being the Docker n00b that I am, I started to search the interwebz as to why my updates hadn't survived... doh! I need to commit my changes.
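That missing step, sketched (the container name and image tag are placeholders — and shown untested, as it needs a running Docker daemon):

```shell
# Persist the changes made inside a running container as a new local image.
docker commit my-php-container my-php:7.1-ci
```

The more repeatable approach, of course, is to capture those manual module installs in a Dockerfile, but `docker commit` is what got me unstuck here.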

WoSign / StartCom SSL certs soon to be worthless in Google Chrome

Posted on 08 July 2017 – 1 minute read

As has been announced for a while, Google, Mozilla and Apple have been slowly dropping the trust level on certificates provided by WoSign / StartCom. This was primarily due to WoSign backdating certificates allowing people to continue to use SHA-1, a known insecure hashing algorithm.

Things are about to get real: all certificates, and any whitelisting that had been put in place by Google in their Chrome browser, will become fully distrusted, according to Devon O'Brien, Chrome's security engineer.

"Beginning with Chrome 61, the whitelist will be removed, resulting in full distrust of the existing WoSign and StartCom root certificates and all certificates they have issued," O'Brien said. "Based on the Chromium Development Calendar, this change should be visible in the Chrome Dev channel in the coming weeks, the Chrome Beta channel around late July 2017, and will be released to Stable around mid September 2017."

O'Brien advised sites still using certificates issued by WoSign / StartCom to "consider replacing these certificates as a matter of urgency to minimize disruption for Chrome users."

Downgrade Yubioath Desktop from 4.x to 3.1.0 for Linux

Posted on 06 April 2017 – 3 minute read

I love my Yubikey Neo and use it many, many times daily on all 3 platforms (Linux, Mac and Windows). My primary OSes are Linux for my personal Lenovo laptop and my work-issued MacBook Pro. I recently upgraded Yubioath Desktop on my Linux box running Linux Mint to v4.0.1. This upgrade touted an improved user interface amongst other fixes and improvements... woohoo, I thought to myself, as although the previous version (3.1.0) was a fair improvement over 3.0.x, it wasn't without its issues.

The install went smooth as silk; as I was using their Ubuntu / Debian PPA, it was a simple case of apt upgrade yubioath-desktop. After the upgrade had completed, I ran it from the desktop and a shiny new application opened. At first glance this looked good, but it wasn't long before there was, IMO, a glaring issue... the items in the list were in some kind of who-knows-what illogical order. Version 4.x, like 3.1.0, does have a filter bar at the bottom, but one of my frequently used items is 'Amazon', which was always the first entry in the list, so I never needed the filter bar for that. Right now, in version 4.0.1, I have no idea what position my Amazon item is in, only that it's buried deep somewhere amongst the items.

UniFi Video G3 Camera CCTV home setup

Posted on 13 March 2017 – 6 minute read

Following on from a recent post on my new Ubiquiti UniFi networking setup, I decided to write a separate post about my UniFi G3 CCTV home setup too.

I originally had an Annke system. This was “OK” for a while, but the cloud viewing component for my “iThings” used XmEye. This was apparently open to issues with the Mirai IoT botnet malware. I immediately disconnected the ethernet cable from the back of the control box and made sure that no wireless connection could be made from it to the outside world.

I had got quite accustomed to knowing that CCTV was protecting my home to some degree, so I decided to put all my eggs in one basket, for want of a better term, and went with the G3 units from Ubiquiti. The immediate specs of interest for these are:

  • 1080p HD resolution
  • 30fps
  • Built-in microphone
  • PoE installation
  • IR night vision

The Annke system didn't have audio, so this was a bonus for me, plus they can record in full 1080p HD definition, better than the 720p of my previous setup. One other big bonus over the previous kit, was that they were powered via PoE so I could plug them directly into my UniFi 8 port 150w switch without requiring any extra cables.