
Using Ansible to Setup Custom Dotfiles

Tags: ansible, deployment • Categories: Learning

A while back I posted about using Ansible for an Elixir application. I recently wanted to update this application and add a couple of new features:

  • Add my dotfiles to the server, similar to GitHub Codespaces (using this role)
  • Add Homebrew, mostly so my dotfiles work
  • Automate more of the dokku-based setup

Here’s the heavily commented ansible config for a dokku-based application (not specific to Elixir):

- hosts: all
  # executes all tasks as a privileged user
  become: true
  # roles exist in ~/.ansible/roles
  # roles are executed in the order they are specified
  roles:
    - dokku_bot.ansible_dokku
    - geerlingguy.swap
    - markosamuli.linuxbrew
    - iloveitaly.dotfiles_bootstrap

  vars:
    # if you need to adjust swap, you'll need to remove and then reset
    # swap_file_state: absent
    swap_file_size_mb: '5120'

    # use_installer avoids cloning the entire repo, which can take a long time and is very large
    linuxbrew_use_installer: true
    linuxbrew_init_shell: false
    linuxbrew_home: "/home/{{ ansible_user }}"

    # the script will be run on the linked dotfiles
    dotfiles_repo: ""
    dotfiles_repo_local_destination: "/home/{{ ansible_user }}/dotfiles"

    dokku_version: 0.31.3

    # NOTE if you run into issues, you can set the versions specifically
    # herokuish_version: 0.5.18
    # plugn_version: 0.5.0
    # sshcommand_version: 0.11.0

    # these are not SSH users, but dokku access users which enable you to `git push dokku HEAD`
    dokku_users:
      - name: mbianco
        username: mbianco
        ssh_key: "{{ lookup('file', '~/.ssh/') }}"

    dokku_plugins:
      - name: clone
        url: https://github.com/crisward/dokku-clone.git
      - name: letsencrypt
        url: https://github.com/dokku/dokku-letsencrypt.git
      - name: postgres
        url: https://github.com/dokku/dokku-postgres.git

  tasks:
    - name: create app
      dokku_app:
        # change this name in your template!
        app: &appname app

    - name: environment configuration
      dokku_config:
        app: *appname
        config:
          MIX_ENV: prod
          SENTRY_DSN: "{{ sentry_dsn }}"
          # specify port so domains can setup the port mapping properly
          PORT: "5000"
      vars:
        # encrypted variables need to be stored here in vars and then pulled into config via {{ var_name }}
        sentry_dsn: !vault |

    - name: add domain
      dokku_domains:
        app: *appname
        state: set
        domains:
          - example.com # replace with your domain

    - name: clear global domains
      dokku_domains:
        state: clear
        global: True
        domains: []

    - name: postgres setup
      block:
        - name: postgres:create default
          dokku_service_create:
            name: default
            service: postgres
          environment:
            POSTGRES_IMAGE: postgis/postgis
            POSTGRES_IMAGE_VERSION: 16-master

        - name: Check if postgres is already exposed
          command: dokku postgres:info default
          register: postgres_info
          ignore_errors: true

        - name: expose postgres to the local machine so it can be accessed via ssh
          command: dokku postgres:expose default
          when: "'Exposed ports:$' in postgres_info.stdout_lines | join(' ')"

    - name: postgres:link default
      dokku_service_link:
        app: *appname
        name: default
        service: postgres

    - name: set letsencrypt email and request certificate
      block:
        # TODO app is hardcoded here, but I'm too lazy to fix this upstream right now
        - name: Set letsencrypt email
          command: dokku letsencrypt:set app email

        - name: letsencrypt
          dokku_letsencrypt:
            app: *appname

# NOTE you'll need to git push once this is all setup
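If you're starting from scratch, the third-party roles above need to be installed before the playbook will run. A minimal requirements.yml for them might look like this (a sketch — whether and how to pin versions is up to you):

```yaml
# requirements.yml — the third-party roles used by the playbook above
roles:
  - name: dokku_bot.ansible_dokku
  - name: geerlingguy.swap
  - name: markosamuli.linuxbrew
  - name: iloveitaly.dotfiles_bootstrap
```

Install them into ~/.ansible/roles with `ansible-galaxy install -r requirements.yml`, then run the playbook as usual (add `--ask-vault-pass` if you use a vault-encrypted variable like the sentry_dsn above).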

Ansible learnings

Some additional learnings this time around:

  • lookup('env','HOME') is the home directory of the environment that ansible is executed from, i.e. your local machine. The same goes for lookup('file', '~/something'). It is not the home directory of the ansible_user on the server.
  • If become: false, then a ~ in a path used on the server will be expanded to /home/root
  • The shell module uses sh by default, not bash.
  • Roles are executed in the order they are specified in the playbook, then tasks are executed.
  • When running Ansible tests via molecule you can ignore specific yaml blocks with tags: [molecule-idempotence-notest]
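A quick sketch of the lookup and shell learnings above (the task names are mine, and assume a trivial play against any host):

```yaml
- hosts: all
  tasks:
    # lookup() is evaluated on the control machine, not the server
    - name: print the local home directory
      debug:
        msg: "{{ lookup('env', 'HOME') }}"

    # ansible_env is gathered from the remote host instead
    - name: print the remote home directory
      debug:
        msg: "{{ ansible_env.HOME }}"

    # shell runs sh by default; set executable explicitly if you need bash-isms
    - name: run a command under bash
      shell: echo "hello from $0"
      args:
        executable: /bin/bash
```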

Automated Ansible Test & Deployment Using GitHub Actions

As part of this work, I created my first role and published it on Galaxy (the package system for Ansible). Here’s the full GitHub action to test the role (ansible package), generate a changelog, and publish it to Galaxy.

name: Test, Build, and Publish

on:
  push:
    branches:
      - "main"

jobs:
  test:
    runs-on: ubuntu-latest
    container:
      image: rsprta/molecule-runner
      options: --privileged
    strategy:
      matrix:
        MOLECULE_IMAGE: ["rsprta/debian-ansible:bullseye"]
        MOLECULE_SCENARIO: ["default"]
    env:
      PY_COLORS: 1
      MOLECULE_IMAGE: ${{ matrix.MOLECULE_IMAGE }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Environment
        run: |
          docker -v
          python3 -V
          ansible --version
          molecule --version
      - name: Run Molecule Test
        run: molecule test -s "${{ matrix.MOLECULE_SCENARIO }}"

  release:
    runs-on: ubuntu-latest
    needs: test
    permissions:
      contents: write
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Build Changelog
        id: changelog
        uses: TriPSs/conventional-changelog-action@v4
        with:
          github-token: ${{ secrets.github_token }}
          fallback-version: "0.1.0"
          skip-version-file: "true"
          output-file: ""

      - name: Ansible Galaxy Publish
        if: ${{ steps.changelog.outputs.skipped == 'false' }}
        # gh secret set GALAXY_TOKEN --app actions --body $GALAXY_TOKEN
        run: ansible-galaxy role import --token ${{ secrets.GALAXY_TOKEN }} ${{ }} ${{ }}
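The MOLECULE_IMAGE matrix value is only useful if the molecule scenario reads it; a molecule.yml along these lines (a sketch, not the role's exact scenario config) would pick it up from the environment:

```yaml
# molecule/default/molecule.yml — pulls the platform image from the MOLECULE_IMAGE env var
driver:
  name: docker
platforms:
  - name: instance
    image: "${MOLECULE_IMAGE:-rsprta/debian-ansible:bullseye}"
    privileged: true
provisioner:
  name: ansible
verifier:
  name: ansible
```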

Debugging Ansible Role Imports

Ansible Galaxy calls package publishing "role importing". I was getting this very strange error when importing a role (both locally and on GitHub actions):

Successfully submitted import request 2050688187775608002605649345200875587
unknown field in galaxy_info
  File "/venv/lib64/python3.11/site-packages/pulpcore/tasking/", line 66, in _execute_task
    result = func(*args, **kwargs)
  File "/app/galaxy_ng/app/api/v1/", line 127, in legacy_role_import
    result = import_legacy_role(checkout_path,, importer_config, logger)
  File "/venv/lib64/python3.11/site-packages/galaxy_importer/", line 51, in import_legacy_role
    return _import_legacy_role(dirname, namespace, cfg, logger)
  File "/venv/lib64/python3.11/site-packages/galaxy_importer/", line 57, in _import_legacy_role
    data = LegacyRoleLoader(dirname, namespace, cfg, logger).load()
  File "/venv/lib64/python3.11/site-packages/galaxy_importer/loaders/", line 41, in load
    self.metadata = self._load_metadata()
  File "/venv/lib64/python3.11/site-packages/galaxy_importer/loaders/", line 77, in _load_metadata
    return schema.LegacyMetadata.parse(meta_path)
  File "/venv/lib64/python3.11/site-packages/galaxy_importer/", line 522, in parse
    raise exc.LegacyRoleSchemaError("unknown field in galaxy_info") from e

After searching my entire computer (fd --unrestricted --full-path "/tasking/" /) and failing to find any file matching the stack trace, I finally realized it could be a stack trace from the Galaxy API. That would be an insane thing to do: return a full, raw stack trace from a server I don’t control, in a CLI application running on my computer, formatted exactly like a standard local Python stack trace.

But, that’s exactly what was happening.

I removed the namespace field from galaxy_info and all was well.
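For reference, the importer only accepts a fixed set of keys under galaxy_info, and the namespace is derived from the GitHub user during import rather than declared in metadata. A minimal meta/main.yml sketch (the field values here are illustrative, not the role's actual metadata):

```yaml
# meta/main.yml
galaxy_info:
  # namespace: iloveitaly  # <- an "unknown field" like this fails the import
  role_name: dotfiles_bootstrap
  author: iloveitaly
  description: Bootstrap custom dotfiles on a server
  license: MIT
  min_ansible_version: "2.9"
  platforms:
    - name: Debian
      versions:
        - all
  galaxy_tags:
    - dotfiles

dependencies: []
```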