Custom Docker image in AWS ECR used in GitHub Actions

Posted on 20 June 2023
3 minute read

Running a test suite in your CI pipeline is critical, but I was recently tasked with getting a test suite running without the luxury of database factories or seeders, for a variety of reasons. The approach I decided on instead was to pre-seed a database with test data and bake it into a custom Docker image.

This particular project uses MySQL 8.0 for the database and AWS ECR for the container registry.

The image uses a modified MySQL base image. The default base image maps a VOLUME where all of the database data is stored. Under normal circumstances that is exactly what you want, as it gives you persistent data; however, volumes by their very nature live outside the container, so when committing changes to an image, the pre-populated database data is ignored. To overcome this, a custom Dockerfile was created to build the image with:

# Start from the Debian-based MySQL image
FROM mysql:debian
# Create a data directory that is not declared as a VOLUME
RUN mkdir /var/lib/mysql-no-volume
# Start mysqld with the new datadir via the image's entrypoint
CMD ["--datadir", "/var/lib/mysql-no-volume"]

This specifies a new datadir where the database data is stored, which, when committed, is kept with the image. This was then built as a base image:

docker build -t 123456789012.dkr.ecr.eu-west-2.amazonaws.com/project/testdb:base .

Now that we have an empty base database image, we can run it:

docker run --name testdb -e MYSQL_ALLOW_EMPTY_PASSWORD=true 123456789012.dkr.ecr.eu-west-2.amazonaws.com/project/testdb:base

With the container running, the test database can be created and the test database schema can be imported.
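As a rough sketch of that step (assuming the schema lives in a local schema.sql dump, using the testdb container name from above, and with test as a purely illustrative database name), it could be done from another terminal with:

docker exec testdb mysql -uroot -e "CREATE DATABASE test;"
docker exec -i testdb mysql -uroot test < schema.sql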

Once the database has been created and the import has completed, a new image can be created.

docker ps

This will display the running containers, e.g.:

e6e83dbfc37d 123456789012.dkr.ecr.eu-west-2.amazonaws.com/project/testdb:base "docker-entrypoint.s…" 11 seconds ago Up 10 seconds 3306/tcp, 33060/tcp testdb

The part we want from this is the CONTAINER ID. Next, we can commit these changes to create a new image / tag:

docker commit e6e83dbfc37d 123456789012.dkr.ecr.eu-west-2.amazonaws.com/project/testdb:1.0.0

This image can now be pushed:

docker push 123456789012.dkr.ecr.eu-west-2.amazonaws.com/project/testdb:1.0.0
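Note that pushing requires being authenticated against the registry. If you are not already logged in to ECR locally, something along these lines (assuming the AWS CLI is installed and has access to the registry) should sort it:

aws ecr get-login-password --region eu-west-2 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-2.amazonaws.com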

Now that we have our populated test database image, we need to add to an existing GitHub Actions workflow or create a new one.

Firstly, we need to configure the AWS ECR credentials as a job (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY should be defined in your repository's Actions secrets):

  aws-ecr-login:
    runs-on: ubuntu-20.04
    steps:
      - name: Configure AWS credentials in shared account
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-2
          role-to-assume: arn:aws:iam::123456789012:role/testdb
          role-duration-seconds: 3600
          role-skip-session-tagging: true
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
    outputs:
      docker_username: ${{ steps.login-ecr.outputs.docker_username_123456789012_dkr_ecr_eu_west_2_amazonaws_com }}
      docker_password: ${{ steps.login-ecr.outputs.docker_password_123456789012_dkr_ecr_eu_west_2_amazonaws_com }}

This stores the resulting ECR username and password in the docker_username and docker_password outputs respectively, which can then be used in another job. For us, this will be a tests job:

  tests:
    needs: aws-ecr-login
    name: Backend tests
    runs-on: ubuntu-20.04

    services:
      database-service:
        image: 123456789012.dkr.ecr.eu-west-2.amazonaws.com/project/testdb:1.0.0
        credentials:
          username: ${{ needs.aws-ecr-login.outputs.docker_username }}
          password: ${{ needs.aws-ecr-login.outputs.docker_password }}
        env:
          MYSQL_ROOT_PASSWORD: ${{ secrets.CI_MYSQL_ROOT_PASSWORD }}
        ports:
          - 3306:3306
        options: >-
          --health-cmd="mysqladmin ping"
          --health-interval=10s
          --health-timeout=5s
          --health-retries=3

The database image will now be pulled from AWS ECR and will expose port 3306, which your tests can access on 127.0.0.1.
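As an illustration only (the checkout step, environment variable names and test command here are placeholders for whatever your project actually uses), the steps for the tests job might point the test suite at the service like this:

    steps:
      - uses: actions/checkout@v3
      - name: Run tests
        env:
          DB_HOST: 127.0.0.1
          DB_PORT: 3306
          DB_USERNAME: root
          DB_PASSWORD: ${{ secrets.CI_MYSQL_ROOT_PASSWORD }}
        run: ./run-tests.sh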