Custom docker image in AWS ECR used in GitHub Actions

Posted on 20 June 2023
3 minute read

Running a test suite in your CI pipeline is critical, but I was recently tasked with getting a test suite running without the luxury of database factories or seeders, for a variety of reasons. The approach I settled on was to pre-seed a database with test data and bake it into a custom Docker image.

This particular project uses MySQL 8.0 for the database and AWS ECR for the container registry.

The image uses a modified MySQL base image. The default base image declares a VOLUME where all of the database data is stored. Under normal circumstances this is definitely the desired approach, as it enables persistent data; however, volumes by their very nature are mapped to locations outside of the container, so when committing changes to an image, pre-populated database data is ignored. To overcome this, a custom Dockerfile was created to build the image with:

FROM mysql:8.0-debian
RUN mkdir /var/lib/mysql-no-volume
CMD ["--datadir=/var/lib/mysql-no-volume"]

This specifies a new datadir where the database data is stored, which, when committed, is kept within the image. This was then built as a base image (testdb-base is a placeholder tag):

docker build -t testdb-base .

Now that we have a base empty database image, we can run it (the official MySQL image needs MYSQL_ROOT_PASSWORD, or an equivalent, to initialise; the password here is a throwaway value):

docker run -d --name testdb -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 testdb-base

With the container running, the test database can be created and the test database schema can be imported.
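One way to do this is with the mysql client inside the running container; a sketch, where the database name, dump file, and root password are assumptions rather than fixed values:

```shell
# Create the empty test database inside the running container
docker exec -i testdb mysql -uroot -psecret -e "CREATE DATABASE testdb;"

# Import the schema (and any pre-seeded data) from a local dump file
docker exec -i testdb mysql -uroot -psecret testdb < schema.sql
```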

Once the database has been created and the import has completed, a new image can be created from the running container. First, find its container ID:

docker ps

This will display the running containers, e.g.:

e6e83dbfc37d "docker-entrypoint.s…" 11 seconds ago Up 10 seconds 3306/tcp, 33060/tcp testdb

The part we want from this is the CONTAINER ID. Next, we can commit these changes to create a new image, tagged for the ECR repository (the repository name testdb is a placeholder; the account ID and region match the workflow below):

docker commit e6e83dbfc37d 123456789012.dkr.ecr.eu-west-2.amazonaws.com/testdb:latest

This image can now be pushed:

docker push 123456789012.dkr.ecr.eu-west-2.amazonaws.com/testdb:latest
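The push only succeeds with a Docker client authenticated against ECR. One way to log in, assuming the AWS CLI is configured for the same account (123456789012) and region (eu-west-2) used throughout this post:

```shell
# Fetch a short-lived ECR password and pipe it straight into docker login
aws ecr get-login-password --region eu-west-2 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-2.amazonaws.com
```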

Now that we have our populated test database image, we need to add to an existing GitHub Actions workflow or create a new one.

Firstly, we need to configure the AWS ECR credentials as a job (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY should be defined in your repository's Actions secrets):

jobs:
  aws-ecr-login:
    runs-on: ubuntu-20.04
    outputs:
      docker_username: ${{ steps.login-ecr.outputs.docker_username_123456789012_dkr_ecr_eu_west_2_amazonaws_com }}
      docker_password: ${{ steps.login-ecr.outputs.docker_password_123456789012_dkr_ecr_eu_west_2_amazonaws_com }}
    steps:
      - name: Configure AWS credentials in shared account
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-2
          role-to-assume: arn:aws:iam::123456789012:role/testdb
          role-duration-seconds: 3600
          role-skip-session-tagging: true
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

This stores the resulting ECR username and password in the docker_username and docker_password job outputs respectively, which can be used in another job. For us, this will be a tests job:

  backend-tests:
    needs: aws-ecr-login
    name: Backend tests
    runs-on: ubuntu-20.04
    services:
      mysql:
        image: 123456789012.dkr.ecr.eu-west-2.amazonaws.com/testdb:latest
        credentials:
          username: ${{ needs.aws-ecr-login.outputs.docker_username }}
          password: ${{ needs.aws-ecr-login.outputs.docker_password }}
        ports:
          - 3306:3306
        options: >-
          --health-cmd="mysqladmin ping"

The database image will now be pulled from AWS ECR with port 3306 exposed, which your tests can connect to on 127.0.0.1.
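As an illustration of what the test suite sees, the connection settings point at the mapped service port on localhost. The environment variable names and default values below are hypothetical, not part of the workflow above:

```python
import os

# Connection settings for the pre-seeded MySQL service container. The port
# mapping is 3306:3306, so the database is reachable on 127.0.0.1:3306 from
# the runner. The database name and credentials are whatever was baked into
# the ECR image -- the defaults here are placeholders.
db_config = {
    "host": os.environ.get("DB_HOST", "127.0.0.1"),
    "port": int(os.environ.get("DB_PORT", "3306")),
    "database": os.environ.get("DB_DATABASE", "testdb"),
    "user": os.environ.get("DB_USERNAME", "root"),
    "password": os.environ.get("DB_PASSWORD", "secret"),
}

print(f"{db_config['host']}:{db_config['port']}")
```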