Using Ansible to Set Up Custom Dotfiles
Tags: ansible, deployment • Categories: Learning
A while back I posted about using Ansible for an Elixir application. I recently wanted to update this application and add a couple of new features:
- Add my dotfiles to the server in a similar way to GitHub codespaces (using this role)
- Add brew, mostly for my dotfiles to work
- Automate more of the dokku-based setup
Here’s the heavily commented Ansible config for a dokku-based application (not specific to Elixir):
---
- hosts: all
# executes all tasks as a privileged user
become: true
# roles exist in ~/.ansible/roles
# roles are executed in the order they are specified
roles:
- dokku_bot.ansible_dokku
- geerlingguy.swap
- markosamuli.linuxbrew
- iloveitaly.dotfiles_bootstrap
vars:
# if you need to adjust swap, you'll need to remove and then reset
# swap_file_state: absent
swap_file_size_mb: '5120'
# use_installer avoids cloning the entire repo, which can take a long time and is very large
linuxbrew_use_installer: true
linuxbrew_init_shell: false
linuxbrew_home: "/home/{{ ansible_user }}"
# the bootstrap.sh script will be run on the linked dotfiles
dotfiles_repo: "https://github.com/iloveitaly/dotfiles.git"
dotfiles_repo_local_destination: "/home/{{ ansible_user }}/dotfiles"
# https://github.com/dokku/dokku/releases
dokku_version: 0.31.3
# NOTE if you run into issues, you can set the versions specifically
# herokuish_version: 0.5.18
# plugn_version: 0.5.0
# sshcommand_version: 0.11.0
# these are not SSH users, but dokku access users which enable you to git push dokku HEAD
dokku_users:
- name: mbianco
username: mbianco
ssh_key: "{{lookup('file', '~/.ssh/id_rsa.pub')}}"
dokku_plugins:
- name: clone
url: https://github.com/crisward/dokku-clone.git
- name: letsencrypt
url: https://github.com/dokku/dokku-letsencrypt.git
- name: postgres
url: https://github.com/dokku/dokku-postgres.git
tasks:
- name: create app
dokku_app:
# change this name in your template!
app: &appname app
- name: environment configuration
dokku_config:
app: *appname
config:
MIX_ENV: prod
SENTRY_DSN: "{{ sentry_dsn }}"
# specify port so domains can set up the port mapping properly
PORT: "5000"
vars:
# encrypted variables need to be stored here in vars and then pulled into config via {{ var_name }}
sentry_dsn: !vault |
123
- name: add domain
dokku_domains:
app: *appname
state: set
domains:
- thedomain.com
- www.thedomain.com
- name: clear global domains (dokku.me)
dokku_domains:
state: clear
global: True
domains: []
- name: postgres setup
block:
- name: postgres:create default
# https://stackoverflow.com/questions/27733511/how-to-set-linux-environment-variables-with-ansible
environment:
POSTGRES_IMAGE: postgis/postgis
POSTGRES_IMAGE_VERSION: 16-master
dokku_service_create:
name: default
service: postgres
- name: Check if postgres is already exposed
ansible.builtin.shell: dokku postgres:info default
register: postgres_info
ignore_errors: true
- name: expose postgres to the local machine so it can be accessed via ssh
ansible.builtin.shell: dokku postgres:expose default
when: "'Exposed ports:$' in postgres_info.stdout_lines | join(' ')"
- name: postgres:link default &appname
dokku_service_link:
app: *appname
name: default
service: postgres
# https://github.com/dokku/ansible-dokku/pull/49
- name: set letsencrypt email and request certificate
block:
# TODO app is hardcoded here, but I'm too lazy to fix this upstream right now
# https://github.com/dokku/ansible-dokku/issues/52
- name: Set letsencrypt email
ansible.builtin.shell: dokku letsencrypt:set app email hello@thedomain.com
- name: letsencrypt
dokku_letsencrypt:
app: *appname
# NOTE you'll need to git push once this is all set up
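To fill in the vault placeholder and run a playbook like this, something along these lines works. This is a minimal sketch: the inventory and playbook file names are assumptions, and the Sentry DSN is a dummy value.

```shell
# encrypt the secret; paste the command's output in place of the `sentry_dsn: !vault |` placeholder above
ansible-vault encrypt_string 'https://examplePublicKey@o0.ingest.sentry.io/0' --name 'sentry_dsn'

# run the playbook, prompting for the vault password so encrypted vars can be decrypted
ansible-playbook -i inventory.yml playbook.yml --ask-vault-pass
```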
Ansible Learnings
Some additional learnings this time around:
- `lookup('env','HOME')` is the home directory of the environment Ansible is executed from, i.e. your local machine. The same goes for `lookup('file', '~/something')`. It is not the home directory of the `ansible_user` (see the sketch after this list).
- If `become: false`, then `~` in a path which is used on the server will be expanded to `/home/root`.
- `ansible.builtin.shell` uses `sh` by default, not `bash`.
- Roles are executed in the order they are specified in the playbook, then tasks are executed.
- When running Ansible tests via molecule, you can ignore specific YAML blocks with `tags: [molecule-idempotence-notest]`.
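To make the lookup and shell points concrete, here's a minimal task sketch; the hosts, key path, and command are illustrative, not from the playbook above.

```yaml
- hosts: all
  become: true
  tasks:
    # lookup() runs on the control machine: this reads *your* public key,
    # not a file on the server
    - name: print the public key read from the local machine
      ansible.builtin.debug:
        msg: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

    # ansible.builtin.shell uses sh by default; pass an executable if you
    # need bash-only features like [[ ]]
    - name: run a bash-specific command
      ansible.builtin.shell: '[[ -d /home ]] && echo "home exists"'
      args:
        executable: /bin/bash
```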
Automated Ansible Test & Deployment Using GitHub Actions
As part of this work, I created my first role and published it on Galaxy (the package system for Ansible). Here’s the full GitHub Actions workflow to test the role (an Ansible package), generate a changelog, and publish it to Galaxy.
name: Test, Build, and Publish
on:
push:
branches:
- "main"
jobs:
test:
runs-on: ubuntu-latest
container:
image: rsprta/molecule-runner
options: --privileged
strategy:
matrix:
MOLECULE_IMAGE: ["rsprta/debian-ansible:bullseye"]
MOLECULE_SCENARIO: ["default"]
env:
PY_COLORS: 1
ANSIBLE_FORCE_COLOR: 1
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Environment
run: |
docker -v
python3 -V
ansible --version
molecule --version
- name: Run Molecule Test
run: molecule test -s "${{ matrix.MOLECULE_SCENARIO }}"
release:
runs-on: ubuntu-latest
needs: test
permissions:
contents: write
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Build Changelog
id: changelog
uses: TriPSs/conventional-changelog-action@v4
with:
github-token: ${{ secrets.github_token }}
fallback-version: "0.1.0"
skip-version-file: "true"
output-file: "CHANGELOG.md"
- name: Ansible Galaxy Publish
if: ${{steps.changelog.outputs.skipped == 'false'}}
# gh secret set GALAXY_TOKEN --app actions --body $GALAXY_TOKEN
run: ansible-galaxy role import --token ${{ secrets.GALAXY_TOKEN }} ${{ github.actor }} ${{ github.event.repository.name }}
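For reference, the `MOLECULE_IMAGE` matrix value only matters if the molecule scenario reads it from the environment (the workflow above would also need an `env:` entry such as `MOLECULE_IMAGE: ${{ matrix.MOLECULE_IMAGE }}` on the test step). A minimal `molecule/default/molecule.yml` that could consume it might look like this; the platform name and fallback image are assumptions.

```yaml
---
driver:
  name: docker
platforms:
  - name: instance
    # molecule interpolates environment variables in this file,
    # so the CI matrix can swap out the test image
    image: "${MOLECULE_IMAGE:-rsprta/debian-ansible:bullseye}"
    pre_build_image: true
provisioner:
  name: ansible
verifier:
  name: ansible
```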
Debugging Ansible Role Imports
Ansible Galaxy calls package publishing "role importing". I was getting this very strange error when importing a role (both locally and on GitHub Actions):
Successfully submitted import request 2050688187775608002605649345200875587
running
unknown field in galaxy_info
File "/venv/lib64/python3.11/site-packages/pulpcore/tasking/tasks.py", line 66, in _execute_task
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/app/galaxy_ng/app/api/v1/tasks.py", line 127, in legacy_role_import
result = import_legacy_role(checkout_path, namespace.name, importer_config, logger)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib64/python3.11/site-packages/galaxy_importer/legacy_role.py", line 51, in import_legacy_role
return _import_legacy_role(dirname, namespace, cfg, logger)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib64/python3.11/site-packages/galaxy_importer/legacy_role.py", line 57, in _import_legacy_role
data = LegacyRoleLoader(dirname, namespace, cfg, logger).load()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib64/python3.11/site-packages/galaxy_importer/loaders/legacy_role.py", line 41, in load
self.metadata = self._load_metadata()
^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib64/python3.11/site-packages/galaxy_importer/loaders/legacy_role.py", line 77, in _load_metadata
return schema.LegacyMetadata.parse(meta_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib64/python3.11/site-packages/galaxy_importer/schema.py", line 522, in parse
raise exc.LegacyRoleSchemaError("unknown field in galaxy_info") from e
After searching the entire computer (`fd --unrestricted --full-path "/tasking/tasks.py" /`) and being unable to find any file referenced in this stack trace, I finally realized that it could be a stack trace from the Galaxy API. That would be an insane thing to do: return a full, raw stack trace from a server I don’t control, in a CLI application running on my computer, formatted to look exactly like a standard Python stack trace.
But that’s exactly what was happening.
I removed the `namespace` field from `galaxy_info` and all was well.
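For context, the role metadata that Galaxy validates lives in `meta/main.yml` under `galaxy_info`; extra top-level keys like `namespace` trigger the error above. Here's a minimal sketch with only known-good fields; the values are illustrative, not the role's actual metadata.

```yaml
galaxy_info:
  author: iloveitaly
  description: Clone a dotfiles repo and run its bootstrap script
  license: MIT
  min_ansible_version: "2.9"
  platforms:
    - name: Debian
      versions:
        - all
  galaxy_tags:
    - dotfiles
# note: no `namespace:` key anywhere in galaxy_info
dependencies: []
```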