
Learning Docker Compose by Self-hosting Monica

Tags: deployment, docker, raspberry-pi • Categories: Learning

I ran into Monica a while back, which bills itself as an "open source CRM"—an advanced address book. I’ve made it a hobby to meet random people I run into online. I really enjoy meeting new & interesting people, and I thought it would be nice to note down who I’ve met.

This post is a compilation of notes from 1-2 years ago, and at the time, I did not have much experience with Docker (I really just used Heroku for all hosting and deployment before then). Self-hosting Monica on my raspberry pi was a great excuse to go deep and learn a lot about docker compose. Here’s some of what I learned getting Monica to run self-hosted on my raspberry pi.

Docker Compose

  • Let’s get a separate user for running monica on the raspberry pi
    • Create the user sudo adduser monica
    • Add the new user to the docker group sudo usermod -aG docker monica
    • Let’s try to run the monica image as the new user docker run monica
    • The docker command failed. Couldn’t connect to MySQL: now for some debugging!
  • In the monica readme there’s an example docker-compose.yml that installs mysql for you. Let’s try that rather than installing mysql locally on the machine. Here’s the yaml.
    • docker-compose automatically consumes a local docker-compose.yml if it exists. Copy the yml referenced above into the home directory of the new user.
    • You need to generate APP_KEY. Here’s the easiest way to do this: sudo apt-get install pwgen && pwgen -s 32 1, or you could use a password generator from Raycast.
    • You’d think you need to modify the mysql & data volumes to point to a directory in the home folder of the monica user. You don’t: docker will generate a folder on the machine and link it to that volume for you.
    • If you want to inspect the data in the auto-generated volumes, list them with docker volume ls and inspect a volume by name with docker volume inspect mysql.
    • Run docker compose up -d in the same directory that you copy/pasted the docker-compose.yml definition from the readme. The -d runs the containers in the background, running this command without -d is helpful for debugging changes in your yml.
    • You’ll need to modify the mysql image to use a raspberry pi compatible image. hypriot/rpi-mysql is a popular one, but I couldn’t get it to work. mariadb is a more-open replacement for mysql, and there’s an up-to-date raspberry pi image that ran fine for me.
    • Once your docker compose up command finishes successfully, run docker ps to ensure everything is running properly (i.e. not restarting in a loop).
  • I got the containers running, but they were in a failure loop because the application container couldn’t connect to mysql. Let’s figure this out.
    • Inspect logs on the container: docker logs -f $(docker ps -aqf 'name=monica_db').
    • That command is ugly and hard to remember; you can more easily inspect logs using compose: docker compose logs -f --tail=10 db
    • docker compose down to shut down all of the containers
    • depends_on is better than links, which is deprecated. Every service on the default compose network gets a hostname matching its service name, which is why DB_HOST=db magically works (depends_on itself only controls startup order).
    • Remove MYSQL_RANDOM_ROOT_PASSWORD and replace with MYSQL_ROOT_PASSWORD
    • docker compose restart is helpful for testing various compose changes
    • Not sure what the services are called? Use docker compose ps --services and plug the values into docker compose logs -f
    • docker compose run app env to execute arbitrary shell commands within a new container.
    • docker compose exec app bash to execute shell commands within the existing container that is already running.
    • Clear out the container logs sudo sh -c "truncate -s 0 /var/lib/docker/containers/*/*-json.log"
  • docker compose is a lower-level version of something like dokku. It feels a lot more intuitive compared to raw docker commands but isn’t as batteries-included as dokku. A great middle ground when you need more control over your infra.
  • container_name can be added to the service entry for easier reference when running docker compose {run,exec,...}
  • cat /etc/os-release can give you detailed information about the system version, which is helpful for debugging strange package issues.
  • -f docker-compose-test.yml can be used to specify a non-standard compose yml reference.
  • None of this tinkering was working; the mysql container just kept restarting without any failure logs. I dug into the Dockerfile, and it looks like hypriot/rpi-mysql was abandoned upstream. Non-standard images can easily become out of date, and you’ll run into weird compatibility issues (especially with obscure system versions that run on the raspberry pi).
    • Here’s how to wipe all docker data. This was helpful to make sure I had a clean slate after experimenting with various approaches.
    • Replaced hypriot/rpi-mysql with jsurf/rpi-mariadb and everything worked great
  • Checked the application via http://raspberrypi.local:8080/ and it loaded up! Very cool.
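One aside on the APP_KEY step above: if you’d rather not install pwgen, openssl (usually preinstalled on Raspberry Pi OS; this is an assumption about your system) can generate a key of the same shape:

```shell
# Generate a random 32-character key, similar to `pwgen -s 32 1`.
# 16 random bytes hex-encoded -> exactly 32 characters.
APP_KEY=$(openssl rand -hex 16)
echo "APP_KEY=$APP_KEY"
```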

Here’s the resulting yml running the PHP Monica application and a mysql DB on a raspberry pi:

version: "3.4"

services:
  app:
    image: monica
    depends_on:
      - db
    ports:
      - 8080:80
    environment:
      - APP_KEY=the_generated_key
      - DB_HOST=db
    volumes:
      - data:/var/www/html/storage
    restart: always

  db:
    image: jsurf/rpi-mariadb:latest
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: monica
      MYSQL_USER: homestead
      MYSQL_PASSWORD: secret
    volumes:
      - mysql:/var/lib/mysql
    restart: always

volumes:
  data:
    name: data
  mysql:
    name: mysql

Redis, Reminder Emails, and More

Next, I wanted to make sure reminder emails were sent out.

  • There’s a couple of ways to do this, but the most straightforward is to set up a cron process to run.
  • To do that, I need to add redis as the queue store. I just added a simple hypriot/rpi-redis service to my compose definition.
  • To send emails, you also need a connection to a mail server. I already had that configured for my drive monitoring, so I copied and pasted the values into the MAIL_* variables.

All of this only took a couple minutes. Super cool! Here is the resulting yaml:

version: "3.4"

services:
  app:
    image: monica
    depends_on:
      - db
      - redis
    ports:
      - 8080:80
    environment:
      - APP_KEY=the_key
      - DB_HOST=db
      - REDIS_HOST=redis
      - CACHE_DRIVER=redis
      - QUEUE_CONNECTION=redis
      - MAIL_MAILER=smtp
      - MAIL_HOST=saf
      - MAIL_PORT=465
      - MAIL_USERNAME=username
      - MAIL_PASSWORD=password
      - MAIL_ENCRYPTION=tls
      - MAIL_FROM_ADDRESS=root@raspberrypi.local
      - MAIL_FROM_NAME=Reminders
    volumes:
      - data:/var/www/html/storage
    restart: always

  cron:
    image: monica
    restart: always
    volumes:
      - data:/var/www/html/storage
    command: cron.sh
    depends_on:
      - db
      - redis

  db:
    image: jsurf/rpi-mariadb:latest
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: monica
      MYSQL_USER: homestead
      MYSQL_PASSWORD: secret
    volumes:
      - mysql:/var/lib/mysql
    restart: always

  redis:
    image: hypriot/rpi-redis
    volumes:
      - redis:/data
    restart: always

volumes:
  data:
    name: data
  mysql:
    name: mysql
  redis:

However, we need to share the environment variables between the cron and app services.

  • You can do this by creating a .env file for docker-compose to source.
  • Here’s how the cron is set up in the Dockerfile.
  • The cron.sh script uses busybox to actually run crond, which seemed strange to me. It looks like busybox is a single binary that reimplements many common unix utilities with a completely consistent interface across platforms.
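For reference, the kind of crontab entry that busybox crond ends up running for a Laravel app looks roughly like this (a sketch; the actual schedule baked into the Monica image may differ):

```
# Run Laravel's scheduler every minute; it dispatches any due jobs (like reminder emails)
* * * * * php /var/www/html/artisan schedule:run >> /dev/null 2>&1
```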

Here’s the resulting .env file, stored in the same directory as docker-compose.yml:

# .env
APP_KEY=key
DB_HOST=db
REDIS_HOST=redis
CACHE_DRIVER=redis
QUEUE_CONNECTION=redis
MAIL_MAILER=smtp
MAIL_HOST=smtp
MAIL_PORT=465
MAIL_USERNAME=username
MAIL_PASSWORD=password
MAIL_ENCRYPTION=tls
MAIL_FROM_ADDRESS=root@raspberrypi.local
MAIL_FROM_NAME=Reminders

And here’s the docker-compose definition which sources the .env file using the env_file directive:

# docker-compose.yml
version: "3.4"

services:
  app:
    image: monica
    env_file: .env
    depends_on:
      - db
      - redis
    ports:
      - 8080:80
    volumes:
      - data:/var/www/html/storage
    restart: always

  cron:
    image: monica
    env_file: .env
    restart: always
    volumes:
      - data:/var/www/html/storage
    command: cron.sh
    depends_on:
      - db
      - redis

  db:
    image: jsurf/rpi-mariadb:latest
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: monica
      MYSQL_USER: homestead
      MYSQL_PASSWORD: secret
    volumes:
      - mysql:/var/lib/mysql
    restart: always

  redis:
    image: hypriot/rpi-redis
    volumes:
      - redis:/data
    restart: always

volumes:
  data:
    name: data
  mysql:
    name: mysql
  redis:

This isn’t the only way to run multiple processes inside a single container. Docker only allows one ENTRYPOINT, but within that command you can spin up a supervisord service that manages & monitors multiple processes. I’ve always found unix daemons frustrating, so I prefer to manage multiple processes at the docker layer instead.
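If you did want to go the supervisord route, the config is a small ini file baked into the image. A minimal sketch (hypothetical program names and paths, not Monica’s actual setup):

```ini
; /etc/supervisor/conf.d/app.conf (hypothetical)
[supervisord]
nodaemon=true          ; stay in the foreground so the container doesn't exit

[program:web]
command=apache2-foreground
autorestart=true

[program:cron]
command=busybox crond -f -l 8
autorestart=true
```

The image’s ENTRYPOINT then just runs supervisord, which starts both programs and restarts them if they crash.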

After making these changes, you can run docker-compose up -d to reload the changed configuration for the containers.

Here’s the resulting PR which merged this example upstream.

Automating MySQL Backups

Now with the application running properly, I wanted to make sure that all important data in the app was backed up. Storj is an S3-compatible, low-cost, distributed storage project that you can use for automated MySQL/Postgres backups.

Here’s the final docker-compose file that I used.

services:
  app:
    image: monicahq/monicahq
    env_file: .env
    depends_on:
      - db
    ports:
      - 8080:80
    volumes:
      - data:/var/www/html/storage
    restart: always

  cron:
    image: monicahq/monicahq
    env_file: .env
    restart: always
    volumes:
      - data:/var/www/html/storage
    command: cron.sh
    depends_on:
      - db

  db:
    image: jsurf/rpi-mariadb:latest
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: monica
      MYSQL_USER: homestead
      MYSQL_PASSWORD: secret
    volumes:
      - mysql:/var/lib/mysql
    restart: always

  db_backup:
    image: schickling/mysql-backup-s3:latest
    restart: always
    depends_on:
      - db
    environment:
      MYSQL_HOST: db
      MYSQL_USER: homestead
      MYSQL_PASSWORD: secret
      S3_ACCESS_KEY_ID: key
      S3_SECRET_ACCESS_KEY: secret
      S3_BUCKET: bucket
      S3_ENDPOINT: https://gateway.us1.storjshare.io
      SCHEDULE: '@daily'

volumes:
  data:
    name: data
  mysql:
    name: mysql

Remember that every service on the compose network gets a hostname matching its service name. The db_backup docker image can access the mysql DB via db:3306. db is a magic hostname that resolves to the other container hosting the database (depends_on only ensures the db container starts first).
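To make the service-name DNS concrete, here’s a minimal standalone sketch (hypothetical service names) you can docker compose up to watch one container reach another by name:

```yaml
# minimal sketch of compose's built-in service-name DNS (hypothetical services)
services:
  pinger:
    image: alpine
    # `db` resolves to the db container's IP on the shared default network
    command: sh -c "ping -c 1 db && echo db is reachable"
    depends_on:
      - db
  db:
    image: mariadb
    environment:
      MARIADB_ROOT_PASSWORD: secret
```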

Then I applied the changes using:

docker-compose up -d --remove-orphans

And I checked the application via:

docker-compose logs -f --tail=100 --timestamp db_backup

Docker, especially when combined with docker-compose, is a very cool system. It allows you to weave together a graceful system setup with a simple YAML configuration.

Other Compose Tricks & Learnings

  • If you are building a Dockerfile from within your docker-compose.yml, you can rebuild it via docker compose build
  • docker compose cp is a one-way operation, from container to host, not the other way around!
  • You can use multiple .env files in a single definition:
    todoist-digest:
      image: ghcr.io/iloveitaly/todoist-digest:latest
      restart: always
      env_file:
        - env-mailer
        - env-todoist
  • You can instruct a docker-compose.yml to mount a single file from the local filesystem into the container using volumes:
    todoist-scheduler:
      image: ghcr.io/iloveitaly/todoist-scheduler:latest
      restart: always
      env_file: env-todoist
      volumes:
        - ./scheduler_filters.json:/app/filters.json

Docker Learnings

  • You can copy files out of a container into the host. Helpful for debugging if you don’t have a shared folder setup.
  • ports mapping is structured as host:container. There’s no need to use expose instead of ports; expose only documents ports for other containers, which can already reach each other on the compose network.
  • depends_on does not publish the dependency’s ports to the host; other containers on the network can still reach its ports directly.
  • There is no default ENTRYPOINT; shell-form RUN and CMD instructions are executed via /bin/sh -c.
  • docker-compose.override.yml is automatically merged on top of docker-compose.yml if it exists, which is useful for local-only overrides.
  • docker inspect --format='{{.LogPath}}' 1a18c15e3703 to get the log path
  • Docker argument positioning continues to mess with me. docker run -it --entrypoint sh go-crond is not the same as docker run go-crond -it --entrypoint sh or any other variation.
  • --platform=$BUILDPLATFORM can be passed to FROM so a build stage runs on the architecture of the machine doing the build, which is handy in cross-platform builds.
  • ARG can be used to set build-time variables like base image variants. You pass them to the build command via --build-arg
  • ARGs specified before FROM are not available to RUN shell commands; they’re only in scope for the FROM line itself. To use one in a RUN, redeclare the ARG after the FROM. This is very weird and unexpected behavior. More info.
  • You can specify multiple FROMs in a Dockerfile. This is primarily used for a "builder pattern" where your aim is to compile a binary, or generate some build files, and then copy it to the final build image that is more slim.
  • DOCKER_BUILDKIT=0 docker build . uses the legacy builder, which prints a layer sha at each step that you can jump into.
  • docker system prune -a will wipe the entire docker cache
  • Use --progress=plain to avoid swallowing all of the output from the build command
  • You can’t use host environment variables in a COPY; only ARG/ENV values declared in the Dockerfile are substituted.
  • SHELL ["/bin/bash", "-c"] to specify a new default shell
  • apt-cache policy PACKAGE will list out package versions available on the local cache of the remote repos.
  • You can set the default platform for builds using the DOCKER_DEFAULT_PLATFORM environment variable
  • .dockerignore does not inherit from .gitignore; you need to copy over the things you want to exclude.
  • ADD allows pulling data from URLs and extracting archives, otherwise just like COPY
  • SHELL ["/bin/bash", "-eo", "pipefail", "-c"] avoids having to rewrite set on each script run
  • Great example Dockerfile for a python application
  • It seems like docker exec on an already running container does NOT rerun the entrypoint script.
  • docker build -t netsuite-connector . --invoke bash will run the invoke command when the build fails. This is super useful for debugging containers.
  • docker system df gives you an idea of where docker is taking up your space.
  • docker compose will not give you an error if a port mapping is already used on the host
  • Containers do not inherit the timezone configuration of the host. The easiest way to set the timezone on the container is to set the TZ variable to a timezone definition (e.g., America/Denver).
  • The majority of the time, Docker does an amazing job eliminating the need to think about architecture issues. It emulates different architectures for you. However, if you need to do anything with gdb/strace or other low-level debugging tools that depend on kernel/symbol access, you’ll most likely run into trouble.
  • I’ve been using Orbstack instead of Docker Desktop on my mac and have loved it. Highly recommended!
  • imgcrypt is an interesting project which allows for public distribution of an image but requires a key to run it locally. Similar to sops, but for images.
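The builder pattern and the ARG scoping gotcha from the notes above can both be sketched in one Dockerfile (a hypothetical Go project, illustrative only):

```dockerfile
# ARG before FROM is only in scope for the FROM lines themselves
ARG GO_VERSION=1.21

FROM golang:${GO_VERSION} AS builder
# redeclare to make the value visible to RUN commands in this stage
ARG GO_VERSION
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# slim final image: copy just the compiled binary out of the builder stage
FROM debian:bookworm-slim
COPY --from=builder /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

The toolchain and build cache stay in the builder stage; only the binary ships in the final image.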