Scripting macOS with JavaScript Automation

I've been playing with ActivityWatch, a really neat open-source application to track what you are doing when you are on your computer. It's similar to RescueTime, but open source and with some more advanced features. I've been using it for a couple of months as part of my digital minimalism toolkit and it's worked great at giving me an idea of what's taking up my time.

A couple of things have bugged me about the application, and it's written in languages I've wanted to learn (Rust, Vue), so I decided to make some changes as part of a learning project.

I ended up needing to modify an AppleScript and ran into macOS's JavaScript for Automation (JXA) for the first time. It's a really powerful system but horribly documented, with very little open source code to learn from.

Retrieving the Active Application, Window, and URL using AppleScript

I wanted to extract the active application, title of the main window, and the URL of that window (if the active application is a browser). I found this AppleScript, which was close to what I wanted, but I also wanted to identify the main window if a non-browser was in use:

global frontApp, frontAppName, windowTitle

set windowTitle to ""
tell application "System Events"
    set frontApp to first application process whose frontmost is true
    set frontAppName to name of frontApp
    tell process frontAppName
        try
            tell (1st window whose value of attribute "AXMain" is true)
                set windowTitle to value of attribute "AXTitle"
            end tell
        end try
    end tell
end tell

do shell script "echo " & "\"\\\"" & frontAppName & "\\\",\\\"" & windowTitle & "\\\"\""

Here's what combining these two scripts looks like in JavaScript for Automation:

var seApp = Application("System Events");
var oProcess = seApp.processes.whose({frontmost: true})[0];
var appName = oProcess.displayedName();

// these are set to undefined for a specific reason, read more below!
var url = undefined,
    incognito = undefined,
    title = undefined;

switch (appName) {
  case "Safari":
    url = Application(appName).documents[0].url();
    title = Application(appName).documents[0].name();
    break;
  case "Google Chrome":
  case "Google Chrome Canary":
  case "Chromium":
  case "Brave Browser":
    const activeWindow = Application(appName).windows[0];
    const activeTab = activeWindow.activeTab();

    url = activeTab.url();
    title = activeTab.name();
    break;
  default:
    mainWindow = oProcess.
      windows().
      find(w => w.attributes.byName("AXMain").value() === true)

    // in some cases, the primary window of an application may not be found
    // this occurs rarely and seems to be triggered by switching to a different application
    if (mainWindow) {
      title = mainWindow.
        attributes.
        byName("AXTitle").
        value()
    }
}

JSON.stringify({
  app: appName,
  url,
  title,
  incognito
});

Some notes & learnings that help explain the above code:

- You can write & test JXA from the "Script Editor" application. You can connect Script Editor to Safari for a full-blown debugger experience, which is neat.
- Open up a REPL via osascript -il JavaScript, or run a script directly from your terminal by putting #!/usr/bin/osascript -l JavaScript at the top of your JXA file (see the sketch after this list).
- There's not really an API reference anywhere. The best alternative is Script Editor -> File -> Open Dictionary.
- The JavaScript runtime is definitely heavily modified: Object.getOwnPropertyNames returns __private__ for all of the System Events-related objects. This makes it much harder to poke around in a REPL to determine what methods are available to you.
- whose only seems to work with properties, not attributes. If you want to filter on attributes you need to iterate over each element: windows().find(w => w.attributes.byName("AXMain").value() === true)
- Application objects seem to use some sort of query/ORM-type model under the hood. Queries only seem to execute when value() or another value is requested; otherwise you'll just get a reference to the query that could retrieve the object. This makes it harder to poke at the objects in a REPL.
- If you compile a script once and rerun it, you must reset your variables to undefined, otherwise the values they were previously set to will stick around. This is why all var declarations above are explicitly set to undefined.
- You can import Objective-C libraries and use them in your JXA.
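For example, here's how you might run JXA ad hoc from a terminal (a sketch; the one-liner mirrors the frontmost-process query used above):

# run a one-off JXA expression and print the result
osascript -l JavaScript -e 'Application("System Events").processes.whose({frontmost: true})[0].displayedName()'

# or start an interactive JXA REPL
osascript -il JavaScript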

It's worth noting that some folks online mention that JXA is dead, although not deprecated. I think this is the general state of macOS scripting (including AppleScript): Apple has built some very neat technologies but has done a horrible job of continuing to develop and evangelize them, so they have many sharp edges and sparse documentation.

Executing JavaScript for Automation Scripts from Python

A powerful aspect of the python ecosystem is PyObjC, which enables you to reach into the macOS Objective-C APIs from within a python script. In this case, it allows you to compile & run AppleScript/JavaScript from within a python script without shelling out to osascript. This improves performance, but also makes it much easier to detect errors and parse output from the script.

The snippet below was adapted from this StackOverflow post and requires that you pip install pyobjc-framework-OSAKit:

import json

script = None

def compileScript():
    from OSAKit import OSAScript, OSALanguage

    scriptPath = "path/to/file.jxa"
    scriptContents = open(scriptPath, mode="r").read()

    javascriptLanguage = OSALanguage.languageForName_("JavaScript")
    script = OSAScript.alloc().initWithSource_language_(scriptContents, javascriptLanguage)
    (success, err) = script.compileAndReturnError_(None)

    # should only occur if jxa is incorrectly written
    if not success:
        raise Exception("error compiling jxa script")

    return script

def execute():
    # use a global variable to cache the compiled script for performance
    global script
    if not script:
        script = compileScript()

    (result, err) = script.executeAndReturnError_(None)

    if err:
        raise Exception("jxa error: {}".format(err["NSLocalizedDescription"]))

    # assumes your jxa script returns JSON as described in the above example
    return json.loads(result.stringValue())

Here's the structure of an AppleScript err after executing the script:

{
    NSLocalizedDescription = "Error: Error: Can't get object.";
    NSLocalizedFailureReason = "Error: Error: Can't get object.";
    OSAScriptErrorBriefMessageKey = "Error: Error: Can't get object.";
    OSAScriptErrorMessageKey = "Error: Error: Can't get object.";
    OSAScriptErrorNumberKey = "-1728";
    OSAScriptErrorRangeKey = "NSRange: {0, 0}";
}

Here are some tips and tricks for working with pyobjc in python:

- Always pass None for objc reference arguments; references are returned in a tuple instead. You can see this in the code above ((result, err) = script.executeAndReturnError_(None)): result is the return value of the method, while err is the reference argument that was passed as None.
- : in an Objective-C selector is replaced by _ in the Python method signatures.
- There's a separate package for each objc framework. Import only what you need to avoid application bloat.
- Objc keyword arguments are transformed into positional arguments, not Python keyword arguments.
- I ran into weird initialization errors if I had PyObjC calls in the global namespace (for instance, caching the script immediately as opposed to setting script = None). I'm not sure if this was specific to how the rest of the application I was working in was structured.

Resources

Here are some helpful resources I ran into along the way:

- The best group of open source example scripts I could find: https://github.com/voostindie/vincents-productivity-suite-for-alfred
- Not sure why, but this forum has a lot of good sample code to copy from: https://forum.keyboardmaestro.com
- https://apple-dev.groups.io/g/jxa/wiki/3202
- Some helpful snippets & usage examples: https://gist.github.com/heckj/5b7bb332463a762639e179a37ea3a216
- Official Apple release notes, which have a nice group of snippets.
- A great technical deep dive with links to many interesting resources.


Migrating from bash to zsh

I love productivity tools. Anyone who works with me knows I love my keyboard shortcuts and tiny productivity hacks. Small, incremental productivity improvements add up over time: feeling fast makes you fast. Plus, I just enjoy tinkering and making things more productive.

One of the rabbit holes I love to go down is optimizing my development environment. I spend a lot of time in a terminal, so it's a fun place to optimize my setup. Ever since hearing of Oh My ZSH I wanted to try out zsh, so I set aside some time to update my dotfiles to use zsh as the default shell.

Below are some notes & learnings from the transition.

What's new in zsh?

- There are lots of small packages out there for neat things like autocomplete, async prompts, etc. This is the best part about zsh and the main reason I put the effort into switching.
- There are a bunch of configuration managers out there: Oh My ZSH, zplug, antigen, antibody, zinit, etc. These managers pull various bundles of zsh scripts together and source them for you.
- Antibody was the best manager I could find (when I originally wrote this post in 2020). It allows you to pull directly from GitHub repositories and load shell scripts that aren't packaged as a "plugin". However, in less than a year it died out and is unmaintained. Here's my plugin list with antibody.
- Zinit looks like the best package manager nowadays (2021). Here's how I moved from antibody to zinit and the change that enabled turbo mode. The syntax is strange (see the sketch after this list):
  - ice is a command that modifies the next command (why not just add a modifier to the command itself? Who knows.)
  - for allows you to execute a command as a loop (like you'd expect) without having to separate ice from the actual command. Helpful if you don't need separate ice modifiers for each command.
  - lucid eliminates the loading messages. Not sure why this isn't enabled by default.
  - I found this example setup to be the most helpful in decoding the zinit syntax.
  - zi update updates all plugins.
- Packaging something as a plugin is super simple: create a name.plugin.zsh file in your repo. This file is autoloaded by plugin managers.
- I've always struggled to understand how to map key presses to the strange double-bracket definitions I see (e.g. ^[[A is equivalent to the up arrow key). Run /bin/cat -v and when you press a key it'll output the key definition you can use in key bindings.
- There are many options for up/down history matching. I like the substring search package, but there are great builtins for this as well.
- There are many little changes to the shell which make life easier. For instance, mv something.{js,ts} will rename a file.
- zsh variables have different types. Run type var_name to inspect the type of a variable.
- The zsh line editor is zle. zle -N widget-name adds a widget to the line editor so you can bindkey the widget. bindkey lists out all of your keyboard shortcuts. zle -la lists out all 'widgets' (zsh commands, not sure why they are called widgets). You can bind keyboard sequences to these widgets.
- The edit-command-line widget 'parks' the current command until the next command you type is done executing. Here's how to bind it to ctrl-e (the default ctrl-q binding wasn't working for me).
- The function path is fpath, the list of paths to search for the definition of a function. This is distinct from $PATH in zsh.
- A big improvement with zsh is the ability to run commands asynchronously. For instance, you can display your prompt and then run various git commands and update your prompt status. This is critical for large repos (where git commands can take seconds to run) and is the main reason I switched to zsh.
- <<< is a here string. Useful for passing a string to stdin (echo 'hi' | cat is equal to cat <<< 'hi'). zsh also has heredocs with the standard <<EOL syntax.
- Nifty command to list out all autocompletions. zinit also has a similar (cleaner) command: zi clist.
- Snippet to list aliases, functions, and variables.
- Globs support regex-like syntax. It's worth spending some time reading about this and getting familiar with it.
- There's a neat trend of folks rewriting common utilities (cd, cat, find, etc) in rust. Here's a great writeup of improved utilities you can use. You can find my set of tools here.
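To make the ice/for/lucid pieces concrete, here's a minimal sketch of a zinit config (assumes zinit is already installed; the plugin names are just examples):

# turbo-load a plugin: `ice` modifies only the next zinit command
zinit ice wait lucid
zinit light zsh-users/zsh-autosuggestions

# the `for` form applies the same ices to several plugins without repeating `ice`
zinit wait lucid for \
  zsh-users/zsh-history-substring-search \
  zsh-users/zsh-completions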

Plugins

Some notes on my plugin configuration:

- Here's my list of zsh plugins.
- It took some extra bindkey config to get substring history search working (see the sketch after this list).
- zsh-autosuggestions caused weird formatting issues when deleting and pasting text (the autocomplete text wouldn't use a different color and I couldn't tell what was actually deleted). Modifying ZSH_AUTOSUGGEST_IGNORE_WIDGETS fixed the issue for me.
- I tried to get larkery/zsh-histdb working (really neat project) but it doesn't play well with the fzf reverse-i-search, which I really love. Hoping to give this another go in a year or so to see if the integration with fzf and other standard tooling has improved. Being able to filter out failed commands from your zsh history search would be neat.
- zsh-autosuggestions and bracketed paste don't play well together. This snippet fixed it for me.
- fasd is a really neat tool, but I wanted to customize the j shortcut to automatically pick the first result. Here's how I did it.
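For reference, the extra bindkey config amounts to something like this (a sketch assuming the zsh-users/zsh-history-substring-search plugin is loaded; key sequences vary by terminal, so confirm yours with /bin/cat -v):

# bind up/down arrows to the plugin's history-substring-search widgets
bindkey '^[[A' history-substring-search-up
bindkey '^[[B' history-substring-search-down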

Resources

Some helpful posts and guides I ran into:

- Really awesome guide to fancy zsh features & syntax: https://reasoniamhere.com/2014/01/11/outrageously-useful-tips-to-master-your-z-shell/
- https://remysharp.com/2018/08/23/cli-improved
- https://github.com/unixorn/awesome-zsh-plugins
- https://scriptingosx.com/zsh/
- https://sourabhbajaj.com/mac-setup/iTerm/zsh.html
- http://zpalexander.com/switching-to-zsh/
- https://chenhuijing.com/blog/bash-to-zsh
- https://medium.com/rootpath/replacing-bash-with-zs…
- http://jeromedalbert.com/migrate-from-oh-my-zsh-to-prezto/
- https://terminalsare.sexy/#tools-and-plugins
- https://callstack.com/blog/supercharge-your-terminal-with-zsh/


2020 Goal Retrospective

Another year, another yearly goal retrospective. This year included a grab bag of curveballs, most notably COVID. Although there was a lot of loss this year, I'm blessed to be able to say this year was really good for me and my family.

Without further ado, here's the retro!

What Worked

- Not doing the quarterly reviews and focusing on the monthly reviews. In this season of life (young kids) quiet/focused time is precious and it's not possible to spend much time planning together (or individually, for that matter).
- Small, specific goals that created a habit or helped figure out a workflow worked well. We should continue to pick key habits and work on them through a focused goal. It's important to have only one or two of these per year to prevent your goals from becoming too boring.
- The simple habit tracking sheet (a gsheet with the number of times per week I did my target list of habits) provided a nice weekly reminder of the habits I want to build.
- I started this year reconnecting with a group of friends focused on changing a handful of specific habits. It's been a great motivator to 'flip the defaults' on some behaviors at the beginning of the year.
- I spent a lot of time over the last year being more intentional about my screen time usage. This has paid off: I feel more focused and less distracted than in the past, even if it means I'm the "horrible texter" in group chats. It's worth continuing to improve my systems & disciplines around controlling screen time; it pays a handsome dividend.

What Didn't

- Goals that required lots of communication/coordination with my wife and weren't essential to this year didn't get done. Getting time alone to work on common projects is challenging with young kids. I don't think there is a great solution to this other than being very careful about committing to goals that fall into this category.
- Goals that weren't impactful to get done this year were hard to prioritize. Be thoughtful about goals that are 'nice to haves', or something that is very exciting/an important long-term goal, but not something that can be tied to real progress this year. If the goal isn't really important to get done by the end of the year, don't include it. For example, one of my goals was completing a list of house projects. Most of these were not essential and I made progress on them without intentionally prioritizing them. I enjoy learning new skills and doing things with my hands, so I'd made certain improvements a priority without any additional willpower.
- We didn't hold each other accountable for goals that didn't make any progress by default. In our monthly review, we spent time reviewing the month and what we could improve, but didn't track against the goals we committed to.
- We didn't adjust our goals and revisit some of the things that were just impractical after covid hit. Historically, we've been bad at adjusting goals after setting them. It feels like admitting defeat, which is something I hate doing. I need to get better at accepting that life is dynamic and the focus of a year can change on a dime.
- I naively thought we had the parenting thing down. Kids pushed the limits of our parenting skills this year. My wife and I have spent a lot of time in the second half of the year talking, reading books, implementing new ideas, etc. relating to our parenting. This took a lot of time and was the right place to put our efforts, but it was not reflected in our goals (either explicitly or by reducing the number of additional goals). I don't expect this year to be too different as our oldest continues to get... well... older and we continue to attempt to figure out how to parent well.

What Should Change?

- Don't include goals that impact us more than a year out.
- Don't include goals that aren't critical and will partially get completed by default. Think about which goals require dedicated willpower to change behavior or make significant progress and focus on those.
- Make reviewing our goals and keeping each other accountable to them part of our monthly review.
- Either have goals tied to parenting or leave lots of margin to include time for parenting over the next year.


Blocking Ads & Monitoring External Drives with Raspberry Pi

I've written about how I set up my Raspberry Pi to host Time Machine backups. I took my Pi a bit further and set it up as a local DNS server to block ad tracking systems and, as part of my digital minimalism kick/obsession, to block distracting websites network-wide on a schedule.

Pi-hole: block ads and trackers on your network

Pi-hole is a neat project: it hosts a local DNS server on your Pi which automatically pulls in a blacklist of domains used by advertisers. The interesting side effect is you can control the blacklist programmatically, enabling you to block distracting websites on a schedule. This is perfect for my digital minimalism toolkit.

- Pi-hole has an active Discourse forum. I've come to love these project-specific forums instead of everything being centralized on StackOverflow.
- I'm really impressed with how simple and well designed the install process is. Run curl -sSL https://install.pi-hole.net | bash and a nice CLI wizard walks you through the process.
- You'll need to point DNS resolution to your Pi on your router, but you can manually override your router settings in your internet config in macOS for testing.
- After you have DNS resolution pointing to the Pi, you can access the admin via http://pi.hole/admin
- Upgrade your Pi-hole via pihole -up
- There's also an interesting project which bundles WireGuard (VPN) into a docker image: https://github.com/IAmStoxe/wirehole

Automatically blocking distracting websites

Now to automatically block distracting websites! I have a system for aggressively blocking distracting sites on my local machine, but I wanted to extend this network-wide.

First, we'll need two scripts to block and allow websites. Let's call our blocking script block.sh:

#!/bin/bash

blockDomains=(facebook.com www.facebook.com pinterest.com www.pinterest.com amazon.com www.amazon.com smile.amazon.com)

for domain in ${blockDomains[@]}; do
  pihole -b $domain
done

For allow.sh, just switch the pihole command in the above script to include the -d option:

pihole -b -d $domain
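Put together, allow.sh looks something like this (a sketch mirroring block.sh above):

#!/bin/bash

allowDomains=(facebook.com www.facebook.com pinterest.com www.pinterest.com amazon.com www.amazon.com smile.amazon.com)

# -d removes each domain from the blacklist instead of adding it
for domain in ${allowDomains[@]}; do
  pihole -b -d $domain
done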

You'll need to chmod +x both allow.sh and block.sh. Put the scripts in ~/Documents/. Test them locally via ./allow.sh.

Now we need to add them to cron. Run crontab -e and add these two entries:

0 21 * * * bash -l -c '/home/pi/Documents/block.sh' | logger -p cron.info
0 6 * * * bash -l -c '/home/pi/Documents/allow.sh' | logger -p cron.info

Next, make the following changes to enable a dedicated cron log file and more verbose cron logging:

# uncomment the line containing #cron
sudo nano /etc/rsyslog.conf

# add EXTRA_OPTS='-L 15'. 15 is the *sum* of the logging options that you want to enable
# I found this syntax very confusing and it wasn't until I read the manpage that I realized
# why my logging levels were not taking effect.
sudo nano /etc/default/cron

# restart the relevant services
sudo service rsyslog restart
sudo service cron restart

# follow the new log file
tail -f /var/log/cron.log

What's all this extra stuff around our script?

I wanted to see the stdout of my cron jobs in cron.log. Here's what the extra cruft around {block,allow}.sh does and why it's needed.

The bash -l -c is important: it ensures that the pi user's env configuration is used, so the script can find pihole and any other commands you might use in the script. Sourcing the user's environment isn't recommended for a 'real' production system, but it's fine for our home-based pi project.

By default, the stdout of the script run in your cron definition is not sent to the parent process's stdout. Instead, it's emailed to you (if you don't have email configured on your pi it will land in /var/mail/pi). To me, this is insane, but I imagine it's the result of a decision made long ago and any seasoned sysadmin has it drilled into memory.

As an aside, it is unfortunate that many ancient decisions made on a whim continue to cause wasted hours and lots of frustration to newcomers. Think of all of the lost time, or people who give up continuing to learn, because of the unneeded barriers to entry in various technologies. Ok, back to the explanation I promised!

To avoid having your cron job output sent to mail, you need to redirect the output. | logger does this for us and sends the stdout to syslog. The -p cron.info argument sets the facility.level of the log message. Facility is a weird word for 'process' or 'log category', and it's important because it maps the log entry to the cron.log file specified in the rsyslog.conf modification we made earlier. In other words, it sets the facility of the log message so syslog can run it through its internal ruleset engine to determine which file it should go in. man logger has more nitty-gritty details about how this works.
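You can verify the plumbing without waiting for cron (a quick sketch):

# send a test message with the cron facility and confirm it lands in cron.log
echo "test message from logger" | logger -p cron.info
tail -n 5 /var/log/cron.log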

How long will it take for these block/allow changes to take effect?

Since pi-hole uses DNS for the blacklist, the TTL on the DNS entry matters. Luckily, it's very short (2m) by default. This means it will take ~2m for websites to be blocked after the scripts above run on the Pi. You can check the local-ttl value via cat /etc/dnsmasq.d/01-pihole.conf. You can also see the TTL value of a specific DNS entry via the first number under the ANSWER SECTION response when running dig google.com.
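For example (a sketch):

# pi-hole's local TTL for blocked/allowed entries
grep local-ttl /etc/dnsmasq.d/01-pihole.conf

# the first number in the ANSWER SECTION is the remaining TTL for this record
dig google.com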

If you want to test query response times (and the response content!) between your previous DNS server and your pi-hosted DNS server, you can specify a DNS server to use: dig @raspberrypi.local facebook.com. However, something funky is going on with the query response times: @raspberrypi.local takes longer to execute yet reports < 3ms query times, while @192.168.1.2 definitely executes more quickly but reports longer query times (~40ms). It would be interesting to understand how dig reports these numbers under the hood, but I'm not interested enough to keep digging.

Resources:

- https://discourse.pi-hole.net/t/second-level-blacklist-triggered-on-a-schedule/23715/17
- https://discourse.pi-hole.net/t/change-the-ttl/6903
- https://www.raspberrypi.org/forums/viewtopic.php?t=186833
- https://raspberrypi.stackexchange.com/questions/3741/where-do-cron-error-message-go
- https://serverfault.com/questions/137468/better-logging-for-cronjobs-send-cron-output-to-syslog
- https://www.rsyslog.com/doc/master/concepts/multi_ruleset.html

CUPS: host USB printers on your network through your Pi

I have an old, but trusty, black and white Brother HL-L2340D printer. I rarely print stuff, but it's helpful to have a simple printer around when I need it.

The wireless connection on this printer never worked right. My router (Eero) doesn't host printers (which is very frustrating). The silver lining is this gave me an excuse to spend time learning printing on Linux. Here are some notes:

- CUPS is still the standard Linux print management software. I remember CUPS from over a decade ago; amazing how slowly technology can change sometimes. Install it via sudo apt-get install cups (the commands are collected in a sketch after this list).
- You need to add pi to the lpadmin group: sudo usermod -a -G lpadmin pi
- If CUPS is running properly you can view it locally via https://localhost:631/. You'll need to ignore the invalid certificate warning in Chrome. Use the pi user's login info to log in.
- In my case, the default printer drivers included with CUPS didn't work. I needed to install a specific driver: sudo apt-get install printer-driver-brlaser
- To broadcast the printer on the network you'll need to install avahi tooling (sudo apt-get install avahi-utils), then avahi-browse -a. I already had avahi-daemon installed for the drive hosting setup I did earlier, which broadcast the printers across the network automatically for me. Then: sudo service cups-browsed restart and sudo service cups restart
- A couple weeks after I set this up, I tried to print something and it only printed every other page. I did some digging and it looks like v4 of the brlaser driver, not v6, is available via the Pi's apt-get packages. The easiest solution looked to be building brlaser v6 from source. There was a great guide online (linked below) that walked through this. It was super easy! However, this didn't fix the issue.
- After a bit more digging, this ended up being a low-level driver issue with printing 'complex' (high resolution?) documents: https://github.com/pdewacht/brlaser/issues/40. There was a fix for this on master, so I built the driver from master: wget https://github.com/pdewacht/brlaser/archive/master.tar.gz && tar xf master.tar.gz && cd brlaser-master && cmake . && make && sudo make install
- After installing the driver from master, everything worked really well.
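Collected in one place, the basic setup from the notes above looks roughly like this (a sketch; the brlaser driver assumes a compatible Brother printer):

# install CUPS, the Brother laser driver, and avahi tooling
sudo apt-get install cups printer-driver-brlaser avahi-utils

# allow the pi user to administer printers
sudo usermod -a -G lpadmin pi

# restart services and confirm the printer is broadcast on the network
sudo service cups restart
sudo service cups-browsed restart
avahi-browse -a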

Resources:

- https://blog.za3k.com/printing-on-the-brother-hl-2270dw-printer-using-a-raspberry-pi/
- https://www.openprinting.org/printer/Brother/Brother-HL-L2360D_series
- https://www.howtogeek.com/169679/how-to-add-a-printer-to-your-raspberry-pi-or-other-linux-computer/
- https://www.linuxbabe.com/ubuntu/set-up-cups-print-server-ubuntu-bonjour-ipp-samba-airprint
- https://support.brother.com/g/b/faqlist.aspx?c=us&lang=en&prod=hll2360dw_us&ftype3=100257

SMART Drive Monitoring & HD Spindown [TODO move to another blog post]

In one of the blog posts or forum threads I read, it was mentioned that hard drives won't spin down automatically on Linux. I wanted to dig into this a little bit; here's what I found:

- As a quick refresher, run mount to get a list of all mounted devices on your machine. The /dev/s* entries at the end are your hard drives.
- hdparm is an interesting tool that allows you to inspect and set various drive parameters: sudo apt-get install hdparm -y. The Pi OS seems to have a recent version of this, which is great. However, I've seen a couple of references to hdparm being useless on newer drives, which manage more and more of these settings at the drive level and don't allow any configuration (which makes sense).
- Get lots of info on a drive: sudo hdparm -I /dev/sda (replace sda with your device). The inspection commands are collected in a sketch after this list.
- It sounds like really old drives don't spin down by themselves, but most drives have spin-down/power management support built in. We shouldn't need to worry about drive spindown.
- The most supported toolset available seems to be smartmontools: sudo apt-get install smartmontools. However, the packaged smartmontools is severely out of date (a 2017 version!). You can at least update the drive database using: sudo wget https://raw.githubusercontent.com/smartmontools/smartmontools/master/smartmontools/drivedb.h -O /var/lib/smartmontools/drivedb/drivedb.h. It's not recommended to run the latest ARM version because of nuanced differences in the ARM instruction set that could interact badly with the lower-level drive commands being used.
- Run a long test manually: sudo smartctl -t long /dev/sda
- Inspect lots of info on the drive, including test progress: sudo smartctl --all /dev/sda
- You can set smartd up to scan your drives periodically and email you. Uncomment the smartd startup line in /etc/default/smartmontools. Set up per-drive config in /etc/smartd.conf or use DEVICESCAN to monitor all drives: DEVICESCAN -d removable -n standby -m mike@mikebian.co -M exec /usr/share/smartmontools/smartd-runner. In some cases, DEVICESCAN may not pick up all of the drives, but I was able to verify through the logs that it did in my case. Restart the service: sudo service smartmontools restart
- To test your config, add -M test right after DEVICESCAN and sudo service smartd restart. If everything is working, you'll get a message in sudo cat /var/mail/mail (or via email if you have that set up).
- Setup mail via SMTP TODO
- You can customize the log destination by adding a facility to the smartd options in /etc/default/smartmontools. It's easier to simply tail -f /var/log/syslog | grep smartd
- This looks like a neat tool to monitor hard drive temperature: sudo apt-get install hddtemp
- Can't get your test to complete? The drive may be going to sleep: https://superuser.com/questions/766943/smart-test-never-finishes. If your drive still doesn't complete the test, it may be dying.
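Here's the basic inspection workflow from the notes above in one place (a sketch; replace /dev/sda with your device):

# find your external drives (the /dev/s* entries)
mount

# drive details, including power management support
sudo hdparm -I /dev/sda

# kick off a long self-test, then check health attributes and test progress
sudo smartctl -t long /dev/sda
sudo smartctl --all /dev/sda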

Setting up mail delivery

Here's how to set up mail delivery from your Pi so everything doesn't get stuck in /var/mail:

sudo apt-get install ssmtp -y
sudo nano /etc/ssmtp/ssmtp.conf

# I have an old dreamhost account with an smtp server setup
# here's the config I used to route mail through that smtp server
# it's critical that hostname matches the host in dreamhost
hostname=yourdreamhostdomain.com
mailhub=smtp.dreamhost.com:465
UseTLS=YES            # Secure connection (SSL/TLS)
FromLineOverride=YES  # Force the From: line
AuthUser=email@dreamhost.com
AuthPass=dreamhostpassword
FromLineOverride=YES
Debug=YES

You can then send a test email with echo "Testing pi delivery" | mail email@domain.com. Email sent to /var/mail will automatically be routed through this SMTP server.

Resources:

- https://www.raspberrypi.org/forums/viewtopic.php?t=188462
- https://brismuth.com/scheduling-automated-storage-health-checks-d470b4283e3e
- https://www.lisenet.com/2014/using-smartctl-smartd-and-hddtemp-on-debian/
- https://help.ubuntu.com/community/Smartmontools

Some nifty networking tips & tricks

A grab-bag of interesting networking tricks I ran into while working with the pi:

- arp -e (or arp -a on macOS) scans the network for active IP addresses. ARP (Address Resolution Protocol) maps IP addresses and .local domains to MAC addresses.
- avahi-browse --all --resolve --terminate provides a more detailed view of local network devices.
- If you are curious about how .local works (as I was), dig into mDNS a bit more.
- You can configure htop to include network IO. Here's how to set the defaults.
- dns-sd is an interesting tool to explore what services are being broadcast on your local machine.
- Not networking related, but lsusb -t lists out everything connected to your USB ports.

Upgrading Raspberry Pi

Your Pi won't upgrade to the latest system version automatically. Here's how to upgrade:

sudo apt update
sudo apt full-upgrade


Dumping an AWS RDS Database to Your Local Machine

I'm a big Heroku fan. I used its hosted Redis and Postgres services for my startup; they scaled incredibly well and saved me a ton of time by never having to worry about devops.

One of the things I loved about Heroku was its CLI. You could very easily manage infrastructure through a very thoughtful command line interface.

For instance, a common process I would run was:

1. Dump the production database to my local machine
2. Import the dump into my local Postgres
3. Clean & sanitize the data

This process looked something like:

curl -o latest.dump `heroku pg:backups public-url`
pg_restore --verbose --clean --no-acl --no-owner -h localhost -U postgres -d app latest.dump
bundle exec rake app:sanitize

That first line was all you needed to download a production copy of your DB.

I've been playing around with AWS for a side project and replicating a similar process was surprisingly challenging. AWS RDS (Amazon hosted relational databases) has a concept of 'snapshots' which sounds like exactly what you'd want, but all of the instructions I found looked complicated and there wasn't a simple GUI or CDK interface to create one. Very frustrating!

The easiest solution I was able to find is to tunnel a port from your local to the RDS instance through the EC2 instance (or a bastion host, if you have one) connecting to the RDS DB.

Here's what this looks like:

# don't bind to 5432 on your local, you probably have pg running on that port already
local_host=localhost:5433

# pull the remote host from the db connection string attached to your app
remote_host=domain.hash.us-east-2.rds.amazonaws.com:5432

# you shouldn't be able to access RDS directly;
# proxy the connection to RDS from your local via your EC2 box
ssh -N -L $local_host:$remote_host ubuntu@domain.com

# assumes that `postgres` is your local username
# on your local, in another terminal, you'll run this command to dump the entire remote database
# you'll need your pg password on hand in order to run this command
pg_dump -c -p 5433 -h localhost -U postgres -f ./latest.dump postgres

# after that's completed, you can pull the database into your local
psql app_dev < ./latest.dump

Resources:

- https://stackoverflow.com/questions/14916899/download-rds-snapshot
- https://gist.github.com/syafiqfaiz/5273cd41df6f08fdedeb96e12af70e3b
- https://medium.com/@deepspaceprog/how-to-connect-via-ssh-to-an-amazon-rds-instance-running-postgresql-5e7661cdd37e


Building an Elixir & Phoenix Application

Learning Elixir

Ever since I ran into Elixir/Phoenix through a couple of popular Hacker News posts, I've been interested in tinkering with the language. I have a little idea for an app that I'm just motivated enough to build and that Elixir would work well for. I've documented my learning process below by logging my thoughts as I learned Elixir via a 'learning project'.

What I'm building

Here's what I'd like to build:

- A web app which detects the user's location using the built-in location service in the browser
- The zip code of that location is determined (server- or client-side)
- The zip code is handed off to a server-side process which renders a page with the zip code

Here's what I'll need to learn:

- The Elixir programming language
- The Phoenix application framework
- Managing packages and dependencies
- The Erlang runtime architecture
- How client-side assets are managed in Phoenix
- How routing in Phoenix works

I'm not going to be worried about deploying the application in this project.

This is going to be fun, let's get started!

Learning Elixir & Phoenix

I've worked with Rails for a while now, so most of the conceptual mapping is going to be from Ruby => Elixir and Rails => Phoenix.

- First, let's get a basic Phoenix dev environment up and running: https://hexdocs.pm/phoenix/installation.html
- Wow: "An Erlang system running over one million (Erlang) processes may run one operating system process". Processes are not OS processes but are instead similar to green threads with much less overhead.
- Some tooling equivalents: https://thoughtbot.com/blog/elixir-for-rubyists. asdf, exenv, kiex == rbenv. Looks like asdf is the most popular replacement.
- Reading this through, I can see why rubyists are so angry about the pipe operator (|>). The elixir version is much different (better, actually useful) than the proposed ruby version. It takes the output of a previous function and uses it as the first input to the next function in the chain.
- "Function declarations support guards and multiple clauses". What does that mean? It sounds like you can define a method multiple times by defining what the argument shape looks like. Instead of a bunch of if conditions at the top of a function to change logic based on inputs, you simply define the function multiple times. Makes control flow easier to reason about.
- There's some great syntactic sugar for array iteration: for document <- documents == documents.each { |document| ... }
- "I believe Elixir and Ruby are interchangeable for simple web applications with no high-traffic or that don't require very short response times." This has been my assumption thus far: Elixir is only really helpful when performance (specifically concurrent connections) is a critical component. We will see if this plays out as I learn more.
- I'd recommend creating an elixir folder and cloning all of the open-source projects I reference below into it. Makes it very easy to grep (I'd recommend ripgrep, which is much better than grep) for various API usage patterns.
- To install elixir: brew install elixir; elixir -v verifies that we have the minimum required erlang and elixir versions. I ran this check, we are ready to go! (The basic dev-loop commands are collected in a sketch after this list.)
- mix is a task runner and package manager in one (rake + bundle + bin/*). It uses dot syntax instead of a colon for subcommands: bundle exec rake db:reset => mix ecto.reset
- When I ran the install command for Phoenix it asked for hex. Looks like bundler/rubygems for elixir: https://hex.pm
- Webpack is used for frontend asset management and isn't tied into Elixir at all (which I really like). Postgres is configured as the default DB.
- Now I can start running through the Phoenix hello world: https://hexdocs.pm/phoenix/up_and_running.html
- ecto == ActiveRecord, kind of. Seems a bit more lightweight.
- Time to set up the database! config/dev.exs is the magic file.
- Looks like a very Rails-like folder structure at first glance. Interesting that they have a self-signed local https setup built in. That was a huge pain in ruby-land. Looks like lib/NAME_web => app/
- eex === erb and has ~the same templating language
- https://milligram.io looks like an interesting minimalist bootstrap. This was included in the default landing page.
- elixir atom == ruby symbol
- Erlang supports hot code updates: "We didn't need to stop and re-start the server while we made these changes." Very cool. Later on, I learned that this isn't as cool/easy as it sounds. Most folks don't use this unless their applications have very specific requirements.
- Routing (routes.ex) looks to be very similar to rails. The biggest difference is the ability to define unique middleware stacks ("pipelines") that match against specific URL routes or content-types. Later on, I realized there aren't nearly as many configuration options compared to rails. For example, I don't believe you can use regexes to define a URL param constraint.
- Huh, alias seems to be like include within modules. Nope! Got this one wrong: it looks like it just makes it easier to type a module reference. Instead of Some.Path.Object, with alias you can just use Object (without specifying the namespace).
- use is similar to include in ruby.
- mix phx.server === bundle exec rails server
- mix deps.get === bundle
- Plugs seem similar to Rails engines. Nope! A plug is just a middleware stack. Umbrella applications are similar to Rails engines.
- Dots . instead of double colons :: for nested modules: MyApp.TheModule == MyApp::TheModule
- Huh, never ran into the HTTP HEAD method before: https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/HEAD
- Examples seem to indicate that router pipelines should be used for before_filter types of logic. Looks like a plug can be a full-blown module, or simply a function on the controller that's called before the action starts executing: https://stackoverflow.com/questions/30958446/rails-before-filter-equivalent-in-phoenix
- You can set up after_action-like functionality, but it's not as intuitive: https://elixirforum.com/t/phoenix-controllers-post-action-plugs/18267
- fn is a lambda function in Elixir. Doesn't look like there are multiple ways to do lambdas. Yay! I hated the many ways of defining anonymous functions in ruby that all worked slightly differently (procs, blocks, and lambdas). There is a shorthand syntax: fn(arg) -> arg.something end == &(&1.something)
- There's defp, def, and defmodule. What's the difference? After a bit of digging, these are core elements of elixir which slightly change how methods are defined. defp is a private method, for instance.
- Ahh, func/2 references the implementation of func with two arguments. When referencing a function you must specify the number of arguments using this syntax.
- use Phoenix.Endpoint references a macro. How exactly do macros work? Macros are Elixir's metaprogramming primitive. That's all for now, I'll read more later. I ended up not doing any metaprogramming in my application, but learned a bit about it. It sounds like you essentially specify code you want to inject into a module by quoteing it within the defmacro __using__ function in your module. This __using__ function is automagically called when you use the module. This enables you to dynamically write the elixir code you want to include (you can think of a quote as dynamically eval'd code).
- Live reload for the front and backend is installed by default and "just works" when running a development server. Yay! Hated all of the config in rails around this.
- "It is also possible for an application to have multiple endpoints, each with its own supervision tree" sounds very cool. I'm guessing this allows for multiple applications to be developed within one codebase but to run as essentially separate processes? Something to investigate in another project.
- Interesting that the SSL config is passed directly to the core phoenix endpoint configuration. I wonder if there is something like unicorn/puma in the mix? It looks like there is: Cowboy is the unicorn/puma equivalent.
- Ecto is not bundled in the Phoenix framework. It's a separate project. Looks like phoenix favors a layered vs all-in-one approach, but is opinionated about which packages are installed by default (which I like).
- I don't fully understand this yet, but it looks like there is an in-memory key-value store built into OTP, which is the elixir runtime (i.e. erlang). In other words, something like Redis is built in. What are the trade-offs here? Why use this over Redis or another key/value store?
- Because you can define multiple variations of a method, things like action_fallback are possible. Define error handling farther up the chain and just think about the happy path in the context of the method you are writing. Neat.
- "EEx is the default template system in Phoenix...It is actually part of Elixir itself" Great, so this isn't something specific to Phoenix.
- This made something click for me: "pattern matching is strong typing" https://news.ycombinator.com/item?id=18842123
- It seems as though one of the goals behind pattern matching + function definitions is to eliminate nested conditionals. Elixir (and probably functional programming in general) seems to favor "flat" logic: I'm not seeing many nested if statements anywhere. As I learned later, if statements are generally discouraged and hard to use as they have their own scope (you can't modify variables in the outer scope at all).
- ^ 'pins' a variable. Like const in node, but slightly different because of this "matching not assignment" concept (which I don't fully get yet). This is used a lot in Ecto queries, but I'm not sure why.
- The Gen prefix stands for Generic, NOT Generate as I thought. i.e. GenServer == Generic Server.
- I still don't understand this "let it crash" philosophy. Like, if a sub-routine of some sort fails, it would corrupt the response of any downstream logic. I can see the benefits of this for some sort of async map-reduce process, but not a standard web stack. What am I missing?
- After many rabbit holes, I'm ready to tackle my initial goal! I'm having a blast, it all seems very well designed: I'm getting the same feeling as when I first started learning Rails via Spree Commerce years ago.
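Here are the basic dev-loop commands mentioned throughout these notes, collected in one place (a sketch, run from a Phoenix project directory):

# verify erlang/elixir versions
brew install elixir && elixir -v

# install dependencies (~ bundle) and reset the database (~ rake db:reset)
mix deps.get
mix ecto.reset

# run the dev server (~ rails server), or run it inside an IEx REPL
mix phx.server
iex -S mix phx.server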

What I'm missing from Ruby

Overall, I found the built-in Elixir tooling to be top-notch. There didn't seem to be too many obvious gaps and things generally "just worked". However, there's some tooling from the ruby ecosystem that I was missing as I went along.

- Automatically open up a REPL when an exception is thrown. In ruby, this is done via pry-rescue. Super helpful for quickly diving into the exact context where the error occurs.
- In Phoenix, it would be amazing if the debugging plug (which displays a page when an exception is thrown) displayed the variables bound in a specific scope so I could reproduce & fix errors quickly. It would be even better if a REPL could be opened and interacted with on the exception page; better-errors does this in ruby. Given that all code in Elixir is functional, simply knowing the local variables in a specific scope would be enough to reproduce most errors and would make for a very quick debugging loop.
- iex -S mix phx.server feels weird. It would feel a bit nicer if there was a mix phx.console which set up IEx for you.
- The Allow? [Yn] prompt is annoying when I'm debugging a piece of code. It would be great if you could auto-accept require IEx; IEx.pry requests.
- In a debugging session, I couldn't figure out how to navigate up and down the call stack. Is there something like pry-nav available?
- Scan dependencies for security issues. In ruby, this is done via bundler-audit.
- I couldn't find a VS Code extension with Phoenix snippets.
- Built-in structured logging. In my experience, using structured logs is incredibly helpful in effectively debugging non-trivial production systems. I've always found it frustrating that it's not built into the language (I built one for ruby). I think it would be amazing if this was provided as an optional feature in Elixir's logger: Logger.info "something happened", user: user.id => something happened user=1
- It doesn't seem possible to run a mix task in production when using Elixir releases. There are many scenarios where you'd want to run a misc task on production data (a report, migration, etc). In Rails-land, this has been a great tool for solving a myriad of operational problems when running a large-ish application.
- The ability to add multiple owners/authors to a hex package. This makes it challenging to hand off ownership of a package when the original creator doesn't have the time to maintain it anymore.
- Coming from Rails, phoenix_html feels very limited. There are many convenience methods I'm used to in Rails that I wasn't excited about re-implementing.
- In ruby, if you are working on improvements to a gem (package) you can locally override the dependency using bundle config local.gem_name ~/the_gem_path. This is a nice feature for quickly debugging packages. There's not a built-in way to do this in Elixir.

I posted about this on the Elixir forums and got helpful workarounds along with confirmations about missing functionality.

Initial impressions

I enjoyed learning Elixir! It's a well designed language with great tooling and a very supportive community. However, it still feels too early to use for a traditional SaaS product.

Although there are packages for most needs, they just don't have as many users as the ruby/javascript ecosystem and there's a lot of work you'll need to do to get any given package working for you. Phoenix is great, but it's nowhere close to Rails in terms of feature coverage and you'll find yourself having to solve problems the Rails community has already solved over the years. The deployment story is really poor, and Elixir is not natively supported on Lambda, Heroku, etc.

There are specific use-cases where Elixir is a great choice: applications that have high concurrency and/or performance demands (i.e. chat, real-time, etc) and IoT/embedded systems (via nerves) are both situations where Elixir will shine. The Elixir language has been more carefully curated compared to ruby and continues to improve at a great velocity. It's cool to see the creator of Elixir very active in the forums, actively listening to users and incorporating feedback. It very much reminds me of the early days of Rails.

This is all to say, in my experience, Ruby + Rails is still the fastest way to build web applications that don't have intense concurrency/performance requirements on day one. The ecosystem, opinionated defaults, and hardened abstractions battle-tested by large companies (Shopify, GitHub, Stripe) are just too good. The dynamic nature of the language allows for tooling (better-errors, pry-rescue, byebug, etc) that materially increases development velocity.

Other Learnings

Community matters

When I first started learning how to program, Kirupa (which still exists, amazingly) was an incredible resource. Random people from the internet answered my basic programming questions. All of my initial freelancing work came from the job board. The Flash/ActionScript tutorials on the site were incredibly helpful. It was a relatively small, tight-knit community that was ready to help.

I feel like we've lost that with StackOverflow and googling for random blog posts.

The ElixirForum.com community is awesome and has that same kind, tight-knit, open-to-beginners feel that the forums of the 90s had. I was impressed and enjoyed participating in the community.

Confirmation bias is very real

I already liked Elixir before I dug into it. It looked cool, felt hot, etc. I was looking for reasons to like it as I did this example project.

It was interesting to compare this to my experience with node. I already didn't like Javascript as a whole and was ready to find reasons I didn't like node.

I found them, but would I have found just as many frustrating aspects of Elixir if I didn't have a pre-existing positive bias towards Elixir?

Managing your psychology and biases is hard, but something to be aware of in any project.

Functional programming isn't complicated

"Functional programming" is an overloaded concept. Languages are touted as "functional programming languages", there are dedicated FP conferences, and fancy terms (like "monads") all make it harder for an outsider to understand what's going on.

I want to write up a deep-dive on functional programming at some point, but getting started with this style of programming is very easy:

- You can program in a functional style in any language.
- Don't store state (or store as little as possible) in objects. This forces you to declare all inputs needed for the function as arguments, instead of sourcing variables from an instance or class variable.
- Functions that don't depend on external state are deterministic/idempotent by default. In other words, running the function against the same set of inputs yields the same results.

Boom! You are programming in a functional style. There's more to it, but that's the core.

Per-language folders for easy code search

Having a set of repositories on hand is very helpful in understanding how various libraries are used in production. I've found it super helpful to keep a folder with any great open source applications I can find in the language I'm learning. This makes it very easy to grep for various keywords or function names to quickly understand patterns and real-world usage.

For example:

cd ~/Projects
mkdir elixir
cd elixir
git clone https://github.com/thechangelog/changelog.com

# ripgrep is a faster and much easier to use version of grep
rg -F 'Repo.'

Along these lines, grep.app is a great tool for quickly searching a subset of GitHub repositories (can't wait until GitHub fixes their code search).

Open questions

There's a bunch of concepts I didn't get a chance to look into. Here's some of the open questions I'd love to tackle via another learning project:

- Processes/GenServer/GenStage. Although I did work with packages that create their own processes, I didn't work with Gen{Server,Stage} from scratch.
- Macros / metaprogramming.
- Testing.
- Ecto/ORM.
- Callbacks (what does @behaviour do?).
- Clusters/Nodes (connecting multiple erlang VMs together to load balance).
- Functional programming concepts. These were referenced around the edges but I never dug into them in a deep way.
- Recommended Elixir style guide. I know there's a built-in formatter/linter, but I wonder if there's a community-driven opinionated style guide.
- Background jobs.
- Deployment.
- VS Code/language server/development environment optimizations.
- What's up with @spec? Is there typing coming to elixir?
- Supervisor trees.
- Built-in ETS tables. Looks like a built-in key-value store similar to redis.

Resources for learning Elixir & Phoenix

General

- https://www.sihui.io/first-impression-of-elixir/
- http://crevalle.io/mistakes-rails-developers-make-in-phoenix-pt-1-background-jobs.html
- https://dockyard.com/blog/2016/05/02/phoenix-tips-and-tricks
- http://blog.plataformatec.com.br/2018/04/elixir-processes-and-this-thing-called-otp/
- https://www.scopelift.co/blog/2018/3/1/phoenix-on-heroku-our-experience-getting-coinrecapio-deployed
- https://davidlaprade.github.io/inserting-breakpoints-in-elixir
- https://github.com/h4cc/awesome-elixir
- https://github.com/thoughtbot/constable
- https://thoughtbot.com/services/elixir-phoenix
- http://digitalfreepen.com/2017/08/16/elixir-in-depth-notes.html
- https://elixirforum.com/t/elixir-blog-posts/150
- https://howistart.org/posts/elixir/1/index.html
- https://til.hashrocket.com/elixir
- https://extips.blackode.in
- https://elixirschool.com/en/
- https://gist.github.com/raviwu/2e128666ef7e7325c94753097f48c500

Specific Topics

- Elixir with a Rubyist: http://joaomdmoura.com/articles/learn-elixir-with-a-rubyist-episode-i
- Debugging: https://www.youtube.com/watch?v=w4xMarVUZQ4
- String interpolation: https://thepugautomatic.com/2016/01/elixir-string-interpolation-for-the-rubyist/
- Regex: https://thepugautomatic.com/2016/01/pattern-matching-complex-strings/

Opinions

- https://journal.dedasys.com/2015/04/23/elixir-vs-erlang-a-question-of-momentum/
- https://news.ycombinator.com/item?id=18838115
- https://github.com/dwyl/learn-elixir/issues/102
- https://news.ycombinator.com/item?id=20357055
- http://underjord.io/why-am-i-interested-in-elixir.html
- https://adrian-philipp.com/post/why-elixir-has-great-potential

Videos

- https://www.youtube.com/channel/UC0l2QTnO1P2iph-86HHilMQ
- https://www.youtube.com/channel/UCIYiFWyuEytDzyju6uXW40Q
- https://www.youtube.com/channel/UCKrD_GYN3iDpG_uMmADPzJQ
- https://www.youtube.com/channel/UC47eUBNO8KBH_V8AfowOWOw
- https://www.youtube.com/watch?v=srQt1NAHYC0
- https://www.youtube.com/watch?v=JvBT4XBdoUE
- https://www.youtube.com/watch?v=B4rOG9Bc65Q

Example Applications

Clone these for easy local grepping (see the sketch after the list).

- https://github.com/thechangelog/changelog.com - actively managed
- https://github.com/rizafahmi/elixirjobs - dead project
- https://github.com/poanetwork/blockscout - active
- https://github.com/aviacommerce/avia - looks like a zombie project. No commits in 2+ months.
- https://github.com/edgurgel/httparrot
- https://github.com/hashrocket/tilex
- https://github.com/yodiaditya/gps-monitoring
- https://github.com/ComeBike/come.bike
- https://github.com/getsentry/sentry-elixir
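A quick sketch of pulling a few of these into the per-language folder described earlier (the repo names and rg usage come from the notes above):

cd ~/Projects/elixir

# clone a handful of the example apps for grepping
for repo in thechangelog/changelog.com hashrocket/tilex edgurgel/httparrot; do
  git clone "https://github.com/$repo.git"
done

# then search across all of them for usage patterns
rg -F 'Repo.'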


Using Ansible to Deploy Elixir Applications on Dokku

For me, the best (and most fun!) way to learn is to find a problem to solve with a new set of tools you want to learn. I've documented my process of learning Ansible below; I hope it's interesting to others!

Motivation

I built an application with Elixir and Phoenix and deployed it using Gigalixir. Gigalixir worked well, but after a couple of weeks the site shut down due to a lack of updates (I was on the free tier). Since this project is strictly for learning, I figured it would be fun to learn Ansible and save a couple bucks by signing up for a free VPS service.

I initially chose Vultr because they offered $50 of free credit towards a $3.50/month VPS, which should be more than enough for a year. This ended up not working out and I switched to AWS (detailed below).

I have some experience with Ansible-like technologies. Long ago, I used Puppet to manage the configuration of a single VPS which hosted a Spree Commerce application. It also ran Solr and MySQL servers (this was before managed services were a thing and you had to host things yourself). It was interesting to set up, but a pain to manage. Making changes was always scary and created surprising, hard-to-debug errors. Puppet has a unique DSL, and both the client and the server have to have Puppet installed for the configuration to work properly. It felt better than configuring Apache & Ubuntu by hand in the PHP days, but it wasn't that much better.

I keep hearing about Ansible, let's learn it and see how things have improved!

What I'm building

Here's what I'd like to build:

- An Ansible configuration that will bootstrap a bare VPS with Dokku.
- Setup the Dokku application with an SSL certificate using Let's Encrypt.
- Elixir + Phoenix running using the community buildpacks.
- Ideally, I don't want to do any manual configuration on the VPS. I want my entire production setup to be built via Ansible.

Learning Ansible

Here's my "liveblog" of my thinking and learnings as I built my ansible config:

- A while back, I used Dokku to manage ~5 different microservices on a single (small) AWS VPS (via Lightsail). It worked amazingly well and was very stable. Before moving forward with Dokku, I took a look at the project on GitHub and it's still (very) active, which is amazing! Let's use that to manage our Elixir deployment.
- Ansible is a Python-based replacement for puppet/chef. Looks like it consumes yml files and configures servers via ssh.
- You only need Ansible installed on the "controller machine". This sounds like I can just install it on my laptop and avoid having to install anything on the target/remote server. This is a huge improvement over Chef/Puppet.
- MacOS install: sudo easy_install pip && sudo pip install ansible && ansible --version
- A brew command I ran in the meantime ended up breaking my easy_install version. There was a library conflict. I ended up installing via brew instead and this fixed the issue.
- Setup an ansible.cfg in your project directory. You'll also need an inventory file to specify where your servers are (see the sketch after this list for a minimal project layout).
- You may need to add your SSH key to the VPS you spun up: ssh-copy-id -i ~/.ssh/id_rsa.pub root@123.123.123.123. Alternatively, you can specify an SSH key in your inventory. Put ansible_ssh_private_key_file=~/.ssh/yourkey.pem after your IP address.
- I have ansible all -m ping working. Now to try to whip up an Ansible playbook that will install Dokku.
- Playbooks are a separate yml file that describes how you want to setup the server. Let's call ours playbook.yml. We'll run it using ansible-playbook playbook.yml.
- An Ansible "role" is a bundle of tasks. You can then layer additional tasks on top of the role. I'm guessing you can also run multiple roles (confirmed this later on).
- My main goal is to use https://github.com/dokku/ansible-dokku to bootstrap a server. I cloned this to my local to more easily poke around at the code. It looks like the variable defaults are specified in defaults/main.yml.
- At least in this repo, each task contained in the ansible-dokku repo is a separate py file which defines an interface to Ansible using an AnsibleModule.
- A "lookup plugin" can pull data from a URL, file, etc for a variable. This will be handy for setting up SSH keys, etc. Here's an example: "{{lookup('file', '~/.ssh/id_rsa.pub')}}"
- Looks like roles don't auto install when you run Ansible. "Galaxy" is the package registry for roles. You need to run a separate command to install packages. The best way to manage roles is to setup a requirements.yml and then run ansible-galaxy install -r requirements.yml. Docs are straightforward: https://galaxy.ansible.com/docs/using/installing.html
- Think of "modules" as a library: an abstraction around some common system task so you can call it via yml. A module can contain roles and tasks.
- You'll see name everywhere in the yml files. This is optional and is only metadata used for logging & debugging.
- {{ }} are used for variable substitution. Does not need to be inside a string. You can call lookups from inside the brackets. I'm not a yml expert, but this seems like a custom layer on top of the core yml spec.
- become: true at the top of your playbook tells Ansible to use sudo for everything. Think of it as root: true.
- Each task has a default state. You can override the state by adding state=thestate to your task options. Each task defines a method to extract the current state from the system Ansible is operating on. Here's an example. State is mostly extracted by reading configuration files or running a command to read the status of various systems (it's not as magical as you might expect).
- Ansible has a vault feature which can encrypt an entire file or an inline variable. Rails introduced something similar where it would encrypt your production secrets into a local file so you could edit/manage them all in a single place.
- You can also inline-encrypt a string using ansible-vault encrypt_string the_thing_to_encrypt --name the_yml_key. You can then copy/paste the resulting string into a var.
- Add vault_password_file = ./vault_password to your ansible.cfg and hide the file via .gitignore. This eliminates the need to enter the password each time you deploy via Ansible. You can then store the password in 1Password for safekeeping.
- Encrypted variables need to be stored in vars. I wanted to use encrypted variables for secret definitions passed to dokku config, but I couldn't use the encrypted string directly in the ENV var config. In vars define your secret app_database_url: !vault |..., then reference the secret in your ENV config DATABASE_URL: "{{ app_database_url }}".
- Use -vvvv as a CLI option to enable verbose logging. I ran into an issue where a subcommand was hanging, waiting on a reply from stdin. However, verbose logging didn't help me here. I'm guessing the subprocess called didn't redirect output to the parent stdout/stderr, so I couldn't see any helpful debugging output.
- This issue ended up being a bit interesting. ansible-dokku used the python3 subprocess module to run dokku commands on the machine. check_call was used, which doesn't redirect stdin or stdout, but subprocess data didn't pipe its way to the ansible stdout or stdin even after I switched to using run. I'm guessing there's a layer of abstraction in the ansible library which overrides all process pipes and prevents output from making its way to the user without a specific flag passed to AnsibleModule.
- Alright! I finally have my playbook running properly. Note that most ansible roles seem to work with Ubuntu, but not CentOS, which was the default on the VPS provider I was testing out (Vultr).
- To modify a role that you are using, clone the repo, remove the installed copy from ~/.ansible/roles, and then symlink your cloned directory in its place. This will allow you to edit role code locally and test it on a live server (obviously a horrible idea for a real product, fine for a side project).
- If you see a plain "killed" message in your deployment log, it's probably because the server is running out of memory. Let's add some swap to fix this! There's got to be a role for adding swap memory to a server. There is: geerlingguy.swap. Added that to requirements.yml, added configuration options to my vars, and boom, it works! Nice.
- I tried to add my own dokku_lets_encrypt task to the ansible-dokku role, but I ran into strange permission issues. Also, the development loop was pretty poor: make a change on my local and rerun the change on the server. Not fun. I ended up just giving up and running the letsencrypt setup manually on the server, so I failed in my goal to fully automate the server configuration.
- If you just want to run a single task, use the --tags option: https://stackoverflow.com/questions/23945201/how-to-run-only-one-task-in-ansible-playbook
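To make the project layout concrete, here's a minimal sketch of the scaffolding the notes above describe. The role names match the playbook below; the IP address, SSH key path, and placeholder values are things you'd swap for your own.

# ansible.cfg, inventory, and requirements.yml in the project root
cat > ansible.cfg <<'EOF'
[defaults]
inventory = ./inventory
# keep ./vault_password out of git via .gitignore
vault_password_file = ./vault_password
EOF

cat > inventory <<'EOF'
123.123.123.123 ansible_user=root ansible_ssh_private_key_file=~/.ssh/yourkey.pem
EOF

cat > requirements.yml <<'EOF'
- src: dokku_bot.ansible_dokku
- src: geerlingguy.swap
EOF

# install the roles, confirm connectivity, then run the playbook
ansible-galaxy install -r requirements.yml
ansible all -m ping
ansible-playbook playbook.yml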

Here's the template I based my config off of. Here's the playbook configuration I ended up with, which demonstrates how to configure specific dokku module versions and uses encrypted strings:

---
- hosts: all
  become: true

  roles:
    - dokku_bot.ansible_dokku
    - geerlingguy.swap

  vars:
    swap_file_size_mb: '2048'
    dokku_version: 0.21.4
    herokuish_version: 0.5.14
    plugn_version: 0.5.0
    sshcommand_version: 0.11.0
    dokku_users:
      - name: mbianco
        username: mbianco
        ssh_key: "{{lookup('file', '~/.ssh/id_rsa.pub')}}"
    dokku_plugins:
      - name: clone
        url: https://github.com/crisward/dokku-clone.git
      - name: letsencrypt
        url: https://github.com/dokku/dokku-letsencrypt.git

  tasks:
    - name: create app
      dokku_app:
        # change this name in your template!
        app: &appname the_app

    - name: environment configuration
      dokku_config:
        app: *appname
        config:
          MIX_ENV: prod
          DATABASE_URL: "{{ app_database_url }}"
          SECRET_KEY_BASE: "{{ app_secret_key_base }}"
          DOKKU_LETSENCRYPT_EMAIL: hello@domain.com
          # specify port so `domains` can setup the port mapping properly
          PORT: "5000"
      vars:
        # encrypted variables need to be in `vars` and then pulled into `config`
        app_database_url: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          abc123
        app_secret_key_base: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          abc123

    - name: add domain
      dokku_domains:
        app: *appname
        domains:
          - domain.com
          - www.domain.com

    - name: add domain
      dokku_domains:
        app: *appname
        global: True
        domains: []

    # this command doesn't work via ansible, but always works when run locally...
    # https://github.com/dokku/ansible-dokku/pull/49
    # - name: letsencrypt
    #   dokku_lets_encrypt:
    #     app: *appname

    # you'll need to `git push` once this is all setup

Here are key commands to manage your servers:

# can we reach our inventory?
ansible all -m ping

# encrypt secret keys in playbook
ansible-vault encrypt_string 'the_value' --name the_key

# install dependencies
ansible-galaxy install -r requirements.yml --force-with-deps --force

# run playbook
ansible-playbook playbook.yml

Deploying Elixir & Phoenix on Dokku

I've used dokku for projects in the past, and blogged about some of the edge cases I ran into. It took some fighting to get Elixir + Phoenix running on the Dokku side of things:

- I needed to create a Procfile with an Elixir web worker definition: web: elixir --sname server -S mix phx.server. Things aren't as out-of-the-box compared with Rails. I think this is mostly because there are two separate buildpacks required that aren't officially maintained.
- Dokku plugins are just git repos. There's no registry. The best place to find plugins is the dokku documentation. There's an install command that pulls them from GitHub. The dokku-ansible role handles many common plugins, but you need to add them to your vars => dokku_plugins config to get them to autoinstall.
- dokku clone needs you to add the generated key to GitHub. ssh dokku@45.77.156.135 clone:key to get the public key, then add it as a deploy key in the GitHub repo. It may not be worth it to set this up. Easier to just git-push deploy manually.
- Dokku (apparently, just like Heroku) allows you to set a .buildpacks file in the root directory. Just add a list of git repo URLs. Use a # to specify an exact git repo SHA to use.
- If you keep messing around with deploys, you may exit the shell while there is a lock on the deploy. dokku apps:unlock to the rescue. This has never happened to me on Heroku, although I have always been much more careful with my production applications. Curious how Heroku handles this.
- If the build is failing, instead of continuing to run builds via git push, you can find the failing build container and jump in: docker ps -a | grep build. The second ID, which is either a short SHA or a string (dokku/yourapp:latest), is what you want to plug into docker run -ti 077581956a92 /bin/bash. From there you can experiment and tinker with the build. Most buildpacks modify the PATH to point to executables like npm, node, etc that are pulled locally for bundling web assets. Helpful for debugging issues with buildpacks. (See the consolidated sketch after this list.)
- If you want to jump into a running container: docker exec -it CONTAINER_ID /bin/bash.
- herokuish (the set of scripts which creates the Heroku experience on dokku) builds things in the /tmp/build directory. https://github.com/gliderlabs/herokuish/blob/master/include/herokuish.bash and https://github.com/gliderlabs/herokuish#paths
- It looks like the cache dir is actually stored in /home/APPNAME/cache. This is linked to the build container during a git-push. I ran into issues with the node_modules cache that required some manual debugging.
- dokku run does not enter the same container that's running your app. Use dokku enter app_name process_type the_command for that. If you are generating a sitemap, using dokku run won't work because it doesn't persist the files to the same container that is serving your static assets. Using S3 for static asset hosting would eliminate this problem.
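To pull the debugging commands above into one place, here's a rough sketch of the loop I'd reach for when a deploy misbehaves. The app name (the_app), server IP, and container ID are placeholders; the dokku and docker commands run on the server itself.

# from your local machine: deploy by pushing to the dokku remote (dokku@your-server-ip:app-name)
git remote add dokku dokku@123.123.123.123:the_app
git push dokku master

# on the server: clear a stuck deploy lock
dokku apps:unlock the_app

# on the server: find the failed build container and jump into it
docker ps -a | grep build
docker run -ti 077581956a92 /bin/bash

# on the server: jump into the running app container (unlike `dokku run`)
dokku enter the_app web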

Here's what my buildpack config looks like:

# .buildpacks
https://github.com/HashNuke/heroku-buildpack-elixir.git#1251439227711cf28bbfbafc101f9c9ff7f9345a
https://github.com/gjaldon/heroku-buildpack-phoenix-static.git#b44e094c9da48483af5e221ff11f954a8b85479b

# phoenix_static_buildpack.config
# the phoenix buildpack does not specify recent versions of node & npm, which causes webpack issues
node_version=12.14.1
npm_version=6.14.4

# elixir_buildpack.config
elixir_version=1.10.4
# https://erlang.org/download/otp_versions_tree.html
erlang_version=22.3.4

Configuring AWS EC2 using Ansible

Vultr's free credits ended up expiring after a couple of months (as opposed to a year). I wasn't thrilled with the service, and I was curious to learn more about AWS (with an eye toward using more of its services in the future), so I decided to move the server over to AWS:

- Looks like Amazon Linux isn't well supported by Ansible. Use the Ubuntu image instead. https://github.com/geerlingguy/ansible-role-docker/issues/141
- "Amazon Linux"'s default user is ec2-user; Ubuntu's is ubuntu. Amazon Linux is not compatible with many Ansible packages, so use Ubuntu.
- become: true (sudo mode) is required on Amazon.
- The local disk space of EC2 instances is tiny by default. You can expand the local disk, which is an EBS volume, by navigating to the Elastic Block Store section and adjusting the volume. You'll probably need to restart the instance (shutdown -h now).
- I forgot about this: ports for http and https are not exposed by default. If you run through the one-click EC2 wizard, only ssh will be exposed. Use the longer wizard to generate a "security group" exposing the proper ports.
- You'll also want to setup an elastic IP. This is an IP that you can assign, and then reassign, to another EC2 instance.
- I've always been annoyed by AWS. It's incredibly powerful, but hard to understand. You have to think of every little configuration option as a separate object with state that needs to be configured just right. Designing infra with code via https://github.com/aws/aws-cdk makes a ton of sense. I bet once you load the entire AWS data model in your head things make a lot more sense.

Learning Resources

Ansible

- https://medium.com/@mitesh_shamra/introduction-to-ansible-e5b56ee76b8c
- https://blog.morizyun.com/blog/dokku-isntall-vultr-pass-mini-heroku/index.html
- https://www.digitalocean.com/community/tutorials/configuration-management-101-writing-ansible-playbooks
- https://lebenplusplus.de/2017/06/09/how-secure-are-ansible-vaults/
- https://medium.com/@burakkarakan/what-exactly-is-docker-1dd62e1fde38
- https://opensource.com/article/16/12/devops-security-ansible-vault

Dokku

- https://www.petekeen.net/introduction-to-heroku-buildpacks
- https://github.com/jeffrafter/howto/blob/master/unformatted/elixir-phoenix-dokku.md
- https://dokku.github.io/general/automating-dokku-setup

Yaml

Interestingly, there's no great canonical documentation for YAML. There's a spec, but no real docs on the official homepage.

- http://lzone.de/cheat-sheet/YAML
- https://github.com/darvid/trine/wiki/YAML-Primer

Continue Reading

Running Tests Against Multiple Ruby Versions Using CircleCI

I've been a long-term maintainer of the NetSuite ruby gem. Part of maintaining any library is running automated tests against multiple versions of its dependencies. Most of the time, this is limited to the language version, but it can include other dependencies as well.

Recently my build config stopped working as CircleCI upgraded to V2 of their infrastructure. I found it challenging to find an example CircleCI V2 config with the following characteristics:

- No Gemfile.lock and therefore no caching of gems. When you are testing across ruby versions you can't use a single Gemfile.lock.
- No rails, no databases, just plain ruby.

Here's a heavily documented CircleCI config that tests multiple ruby versions:

version: 2.1

orbs:
  # orbs are basically bundles of pre-written build scripts that work for common cases
  # https://github.com/CircleCI-Public/ruby-orb
  ruby: circleci/ruby@1.1

jobs:
  # skipping build step because Gemfile.lock is not included in the source
  # this makes the bundler caching step a noop
  test:
    parameters:
      ruby-version:
        type: string
    docker:
      - image: cimg/ruby:<< parameters.ruby-version >>
    steps:
      - checkout
      - ruby/install-deps:
          bundler-version: '1.17.2'
          with-cache: false
      - ruby/rspec-test

# strangely, there seems to be very little documentation about exactly how matrix builds work.
# By defining a param inside your job definition, CircleCI will automatically spawn a job for each
# unique param value passed via `matrix`. Neat!
# https://circleci.com/blog/circleci-matrix-jobs/
workflows:
  build_and_test:
    jobs:
      - test:
          matrix:
            parameters:
              # https://github.com/CircleCI-Public/cimg-ruby
              # only supports the last three ruby versions
              ruby-version: ["2.5", "2.6", "2.7"]
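If you want to sanity-check a config like this without burning builds, the CircleCI CLI can lint and expand it locally. A quick sketch, assuming the CLI is installed and the config lives at the standard path:

# check the config for syntax/schema errors
circleci config validate .circleci/config.yml

# expand the orb and matrix parameters to see the jobs that will actually run
circleci config process .circleci/config.yml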

Continue Reading

Time Machine Backups with a Raspberry Pi and External Drives

As I was reviewing my backup strategy, I realized I hadn't completed a Time Machine backup on my machines in a long time. Plugging in the drive was just enough friction to forget doing it completely.

The AirPort Express had a USB port you could plug hard drives, printers, etc. into, and those devices would be magically broadcast to the network. It was awesome, and then Apple killed the device. The Eero I upgraded to is great, but its USB port is useless.

But there's a silver lining! I've been looking for a good excuse to buy a Raspberry Pi, and mounting external hard drives on the network fit the bill. $35 for a tiny computer more powerful than anything I had growing up, and more powerful than a $5 DigitalOcean or AWS VPS. What's not to like?

Purchasing the Hardware

- Raspberry Pi 4 2GB. $45. I didn't end up using the USB-C => micro USB connector and the eBook was useless. HDMI connector was helpful.
- Case, fan, and power supply. $12. The 5V 3A power supply required isn't common, so you'll most likely need to buy one. Having a case is really nice.
- You'll also need a micro SD card, but I had an extra 16GB card.

So not exactly the $35 sticker price that is advertised, but still cheap.

Setting up Raspberry Pi for Remote VNC & SSH Access

My goal was to run the Pi headless. Here's how I got the Pi setup for VNC access over the network that works across reboots:

- Download http://downloads.raspberrypi.org/NOOBS_latest. Unzip and put it on the SD card. Make sure the SD card is FAT formatted. Make sure you don't put the unzipped folder on the root directory, but rather the contents of the unzipped folder.
- Start up the Pi. You'll want a monitor connected via HDMI and a (wired) keyboard to complete the setup process. You don't need a mouse.
- Setup VNC & SSH. Open up a terminal and run sudo raspi-config. Navigate to "Interfacing Options" and enable VNC & SSH. Set boot options to desktop for easy VNC usage. Here's more info.
- You also want to set the default resolution via raspi-config or VNC won't work when you reboot without a monitor.
- On your Mac: brew cask install vnc-viewer. Username: pi, password is what you used during the on-screen setup. You should be able to manage the device right from your Mac (see the sketch after this list).
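Once SSH is enabled, the rest can be done from your Mac. A small sketch, assuming the Pi still uses the default raspberrypi.local hostname (substitute its IP address otherwise):

# confirm SSH works (user `pi`, password from the on-screen setup)
ssh pi@raspberrypi.local

# optional: copy your public key over so future logins skip the password prompt
ssh-copy-id -i ~/.ssh/id_rsa.pub pi@raspberrypi.local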

At this point, you'll have access to the Pi without a keyboard and mouse. Let's setup the Pi to serve up the hard drives over the network!

Setting Up External Hard Drives as Network Attached Storage (NAS)

Here's a couple articles I found that were helpful:

- This one is the most recent and complete: https://gregology.net/2018/09/raspberry-pi-time-machine/
- https://magpi.raspberrypi.org/articles/build-a-raspberry-pi-nas
- https://mudge.name/2019/11/12/using-a-raspberry-pi-for-time-machine/
- https://github.com/mr-bt/raspberrypi-timemachine

None of the articles seemed to completely match my setup. Here's what I wanted to set up:

I have two external drives. I wanted to use one as a networked time machine drive and the other as general storage. One of the drives had a power supply and the other did not.

Here's how I ended up serving my two hard drives on the network:

- Install the packages we'll need: sudo apt-get --assume-yes install netatalk
- A quick note on HFS+ formatted drives: I ended up corrupting the drives in HFS+ mode, most likely because I aggressively turned the power on/off without unmounting the drives. I'd recommend against using HFS+ formatted drives; instead, format to the Linux-native Ext4. I've documented this below.
- Run netatalk -v to make sure you have a recent version and to get the location of the config file. The latest version is indicated here: http://netatalk.sourceforge.net
- sudo nano /etc/netatalk/afp.conf to edit the config file. This is the location for version 3.1. Pull the location of the config from the output of the previous netatalk command if you run into issues finding this file. The instructions inlined in the config file are pretty straightforward.
- In Global, add mimic model = TimeCapsule6,106. This broadcasts the time machine drive to look like a 'real' Time Machine device. Here's a list of other options you can use. Neat! https://wiki.archlinux.org/index.php/Netatalk
- You'll also want to edit sudo nano /etc/nsswitch.conf and append mdns4 mdns to the line with dns. This broadcasts the drive availability on the network.
- Get a list of all services running on your Pi with sudo service --status-all
- sudo shutdown -r now or sudo reboot to restart the system from the command line.
- Changed your AFP config? sudo service netatalk restart
- I did find the MacOS Finder was pretty glitchy when I restarted services on the Pi. I ended up force quitting the Finder a couple of times to pick up new drive configurations. (See the sketch after this list for connecting from the Mac.)
- sudo chown -R pi:pi /media/pi/MikeExternalStorage to fix strange permission issues when accessing the drive. This may have had to do with attempting to use HFS+ formatting at first, so you most likely don't need to do this.
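For the Mac side of things, here's a sketch of connecting to the shares and pointing Time Machine at them. This assumes the Pi is reachable as raspberrypi.local and the share names match the afp.conf below; tmutil is macOS's built-in Time Machine CLI.

# mount the shares in Finder (you'll be prompted for the `pi` user's credentials)
open 'afp://raspberrypi.local/ExternalStorage'
open 'afp://raspberrypi.local/TimeMachine'

# point Time Machine at the networked share (-p prompts for the password)
sudo tmutil setdestination -p 'afp://pi@raspberrypi.local/TimeMachine'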

Here's the final /etc/nsswitch.conf:

# /etc/nsswitch.conf
#
# Example configuration of GNU Name Service Switch functionality.
# If you have the `glibc-doc-reference' and `info' packages installed, try:
# `info libc "Name Service Switch"' for information about this file.

passwd:         files
group:          files
shadow:         files
gshadow:        files

hosts:          files mdns4_minimal [NOTFOUND=return] dns mdns4 mdns
networks:       files

protocols:      db files
services:       db files
ethers:         db files
rpc:            db files

netgroup:       nis

And /etc/netatalk/afp.conf:

;
; Netatalk 3.x configuration file
;

[Global]
mimic model = TimeCapsule6,106

; [Homes]
; basedir regex = /xxxx

[ExternalStorage]
path = /media/pi/ExternalStorage

[TimeMachine]
path = /media/pi/TimeMachine
time machine = yes

A Warning on Filesystem Types

I had everything working, and then I accidentally restarted the Pi and everything was mounted as a read-only drive. I ran mount and saw type hfsplus (ro... in the info line. RO = read only.

After some googling, I found that this seemed to be caused by the HFS+ filesystem (i.e. macOS's default) that I attempted to support by installing hfsprogs hfsplus hfsutils. Don't try to host an HFS+ drive via the Pi: the drivers are not great, and I believe this is why I ran into trouble here.

Here's what I tried to fix the problem:

- After some googling I found: sudo fsck.hfsplus -f /dev/sda1. This didn't do anything for me.
- I tried https://askubuntu.com/questions/333287/how-to-fix-external-hard-disk-read-only. This didn't seem to work for me because I had partitions.
- lsblk, blkid, sudo fdisk -l, and ls -l /dev/disk/by-uuid/ are useful tools for inspecting what disks/devices are mounted.
- sudo cp fstab fstab.bak, edit fstab with nano, UUID=1-1-1-1 /media/pi/ExternalStorage hfsplus force,rw. No quotes around the uuid.
- Tried sudo umount /dev/sda2 && sudo mount /dev/sda2. No luck.
- Determine how large a folder is: sudo du -sh
- Tried disabling the services (sudo service netatalk stop && sudo service avahi-daemon stop) and restarting the computer. No dice.

Uh oh. Not good.

I plugged the drive into my MacOS computer and it told me the drive was corrupted. I tried to repair the disk and it gave me an esoteric error. I think accidentally turning off the power may have corrupted the disk. HFS+ isn't supported natively on Linux and seems to be prone to corruption issues if drives are not unmounted properly. Ext4 is the recommended file system format.

Some reference articles:

- https://askubuntu.com/questions/997279/how-to-make-hfs-external-drive-read-write-journaling-already-disabled-still-n
- https://www.raspberrypi.org/forums/viewtopic.php?t=109157
- https://raspberrypi.stackexchange.com/questions/30151/unplugged-hfs-usb-drive-from-rpi-is-corrupted/41643

It's surprising to me that, in 2020, file system formats still matter across operating systems. I feel like I'm transported back to the 90s. Why isn't this a solved problem yet?

Here's what I did to fix the issue:

- Luckily, although I couldn't write to my drives, I could read from them, so I was able to duplicate the data onto other devices. I pulled the data off to various drives and computers using rsync. It was a huge pain: I needed to spread the data across a couple of different devices. Here's the command I used: sudo rsync -avh --progress pi@192.168.2.200:/media/pi/MikeExternalStorage/MikeiTunes ~/Desktop/MikeiTunes
- After the data was moved off, I reformatted the drive:
  - Unmount the drive: sudo umount /dev/sda2
  - Wipe the drive and format it: sudo mkfs.ext4 /dev/sda
  - Create a mount point: sudo mkdir /media/TimeMachine && sudo chmod 777 /media/TimeMachine
  - Add the table entry: sudo nano /etc/fstab, then /dev/sda /media/TimeMachine auto defaults 0 2. fstab => "File System Table"
  - Add a label to the drive: sudo e2label /dev/sda TimeMachine. https://www.raspberrypi.org/forums/viewtopic.php?t=67896
  - Mount the drive: sudo mount /dev/sda
- After this was done, I rsync'd the data back to the drive. In some cases I needed to use sudo on the Pi to avoid permission issues: sudo rsync -avh --progress --rsync-path="sudo rsync" ~/Desktop/EmilyBackup pi@192.168.7.200:/media/ExternalStorage/
- I found it helpful to use screen to manage long-running sessions on the Pi:
  - sudo apt install screen
  - screen to start a new session
  - screen -ls to list all sessions. Attach to a session like screen -r THE_ID
  - tmux is also great for this (and probably better)
- I also needed a way to unzip some folders. There's no built-in util that provides unzipping with a progress indicator. I found 7z, which fits the bill: 7z x Projects_old.zip -o./. https://askubuntu.com/questions/909918/q-how-to-show-unzip-progress

Summary: don't use HFS/mac formatted hard drives on linux!

A Note on Using Old Drives for Storage

I couldn't get my 2TB drive to pass the SMART monitoring (details in a future blog post!) 'long' test. I did a bit of research and external drives tend to only last ~5 years. The 2TB drive was ~10 years old and had been exhibiting some glitchy behavior. I ended up replacing it with a much smaller 2TB drive for $60.

This is all to say: it's worth replacing drives every 5 years or so and ensuring they are monitored by SMART to catch any failures early on.

Fixing Raspberry Pi's Emergency Mode

I connected the new drive to replace my old one, formatted it, and setup the fstab config just like the other drives. The UI started glitching out and I had two mounts setup for my old drive. I figured restarting the Pi would fix the issue, but that was a bad idea.

The Pi wouldn't connect to the network and appeared dead. I tried unplugging the drives, but that didn't help.

I ended up having to plug it back into a monitor and found the following message:

You are in emergency mode. After logging in, "journalctl -xb" to view system logs, "systemctl reboot" to reboot, "systemctl default" or ^D to try again to boot into default mode.

Here's what I found:

- If any of the devices in fstab cannot be found, it will hang the boot process and you'll be kicked into emergency mode. This was surprising to me: looks like Linux is not too forgiving with bad configuration.
- Here's how to fix the issue:
  - Plug your SD card into another computer, edit cmdline.txt on the root of the card, and add init=/bin/sh to the end of it. Looks like the Pi reads that txt file to determine how to boot. I believe this is a Pi-specific config file.
  - Plug the SD card back into the Pi, run mount -o remount,rw / when the prompt appears, and comment out the custom lines in /etc/fstab.
  - Reboot the Pi and you'll be back in action.
- I ran journalctl -xb but couldn't find any errors specifically identifying the drive. /var/log/syslog is also a good place to look.
- sudo findmnt --verify --verbose is a way to verify your fstab config.
- If you specify defaults,nofail in fstab, it looks like you can avoid this problem. I'm not sure what the side effects of this approach are (see the sketch after this list).
- I don't understand why fstab definitions are necessary if the default drive config is working fine. All drives automount when connected. I ended up removing all fstab entries and using the autogenerated mount points at /media/pi.
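Here's a minimal sketch of the nofail approach mentioned above; the UUID is a placeholder (find the real one with blkid):

# append a hypothetical entry; `nofail` lets boot continue if the drive is missing
echo 'UUID=1111-2222-3333 /media/pi/TimeMachine ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab

# verify the fstab config before rebooting
sudo findmnt --verify --verbose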

Resources:

- http://www.clarkle.com/notes/emergecy-mode-bad-fstab/
- https://www.raspberrypi.org/forums/viewtopic.php?t=193153
- https://unix.stackexchange.com/questions/44027/how-to-fix-boot-failure-due-to-incorrect-fstab

Under Voltage & USB-Powered Devices

When I replaced my old hard drive, I grabbed a USB-powered one off of Amazon. However, the Pi can only support powering a single external drive.

You can determine if this is happening by searching the syslogs:

cat /var/log/kern.log | grep -i 'voltage'
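Another check worth knowing about, assuming the stock Raspberry Pi OS tooling: the firmware tracks under-voltage and throttling events in a bitmask you can read directly.

# 0x0 means no under-voltage or throttling has been detected since boot
vcgencmd get_throttled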

Some references:

- https://www.raspberrypi.org/forums/viewtopic.php?t=160819
- https://github.com/raspberrypi/linux/pull/2397

The solution is buying a USB hub that is externally powered, like this one.

Spotlight Indexing on NAS

It's not possible:

- https://discussions.apple.com/thread/130462?tstart=0
- https://www.raspberrypi.org/forums/viewtopic.php?t=270155
- https://care.qumulo.com/hc/en-us/articles/115008514788-Mac-OS-X-Spotlight-Search-and-Qumulo

As an aside, I've also learned it's not possible to exclude folders with a specific pattern (such as node_modules) or with a specific dot file within the folder (.metadata-no-index). You can only control what's indexed via the control panel.

Wow, this seems like a lot of work? Was this even a good idea?

Yes, it was. Took way more time than I expected. Probably not a great idea! If you just want to get a networked time machine up and running quickly, I wouldn't do this.

But...I learned a bunch, which was the fun part for me.

Why is Linux still hard to use?

Way back before Heroku & AWS were a thing, I used to manage server config for various apps I developed. It was a massive pain. I remember clearly staring blankly into my terminal editing files in /etc/* as instructed by obscure blog posts across the internet and hoping things worked. Once I had things working, I left them alone.

Now, to be sure, things have gotten better. Ansible, Terraform, CDK, etc all allow you to configure servers and cloud services with code rather than manually editing files. However, these abstractions are simply that—abstractions. Many times you'll run into issues with the underlying system config that you need to correct.

The Pi experience, which I'm assuming mirrors the general state of Linux configuration, is really bad. I forgot how incredibly valuable it is to have sane, smart defaults on MacOS, tailored to the hardware it's running on. Given the slow decay of Mac devices (high hopes for Apple Silicon, but overall Apple machines have gotten worse over the years), I've thought about moving to Linux, but this experience has eliminated that thought from my mind.

Maybe some of this pain can be chalked up to the Pi OS, but I can't imagine things are many orders-of-magnitude better on other Linux variants. I hope I'm wrong, and I hope Linux desktops can eventually get to the 'just works' state that MacOS maintains.

Continue Reading

My Process for Intentional Learning

Lately, I've been able to carve out dedicated time for learning new skills. What I've learned has been random, from programming languages to how to build a tiny house. I've found a lot of joy in learning new skills, slowly becoming a generalist.

Over the last year, I've found you can optimize your "learning time" by thinking through the process of learning before you start. In my experience, picking a learning project, and creating a "learning log" for each skill is hugely helpful.

Identify a Learning Project

Learning in a vacuum doesn't work for me.

I love reading fiction, but reading about a topic that I have no immediate need to understand is much harder to comprehend. When I'm motivated by a problem I'm trying to solve, I can plow through books and other information quickly. Without an immediate need, I'll read the same page many times or fall asleep with the book in my hand.

In other words, learning something Just in Case doesn't work for me. It has to be Just in Time.

This is why a 'learning project' is really important: a small, useful, and preferably time-bound project that requires new skills to complete. The project is a forcing function for learning new skills. You want a project where leaving it half-done is painful.

For example, when our second daughter was born, I knew she would need the room in our house that I was using as an office (I work remotely). I could move into a room in our basement, but I loved having a large window in the room and for some reason, I didn't want to work in a basement. So, I decided to build a tiny house to work in.

I'd never built any physical thing in my life before.

I knew I'd lose motivation once I started it (especially as the Colorado summer heat ramped up), so I ordered a massive truckload of wood, dumped it in my driveway, and built the initial foundation. I knew our new daughter would need my room at the end of the summer, and that it would become too cold to make real progress by October.

These factors created enough motivation to force me to finish the project when I didn't want to. I'm glad I did! By building a mini house I learned most of the handyman skills I've been wanting to learn for years—the perfect learning project.

Before jumping into learning something new, take some time in picking your learning project.

For instance, let's say you wanted to learn software programming. You could take a bunch of online courses or start reading random tutorials online. You could spend a bunch of money on a coding bootcamp, or join something like Lambda School.

However, you could also find a simple job on UpWork that feels small enough for you to figure out. This provides context and a specific application for your learnings, and the extrinsic motivation to finish the work (there's someone on the internet trusting you to get this thing done for their business).

Structure Your Learning

After you've picked a project, I've found it to be helpful to structure your learning process by asking some questions (here's a post that roughly follows this structure):

- What's your learning project? Example: build a tiny house or automatically mark RSS articles as read.
- What does success look like? This prevents you from following rabbit holes and forces you to finish the project. Example: build an insulated tiny house (not painted, not drywalled) or a script which marks articles more than two weeks old as read.
- What 'open questions' do you have? What are the gaps in your knowledge that would prevent you from completing the project? Write these down at the top of the document.
- What tools are you missing? This won't be apparent to you at the outset, but as you start learning you'll find friction in your process that you'll want to eliminate. For instance, I found that the hammer I had was hard to use. I noted this down and found that $10 bought me a much better hammer. Or, in the context of programming, your IDE autocomplete may not be working in the language you are learning.
- What are some of the top books, tutorials, YouTube channels, etc that align most closely with what you are trying to do?
- What completed pieces of work are similar to what you are trying to do? For digital projects, this could be open source projects or raw asset files for a media project.
- Is there a community (online or otherwise) around the thing you are learning? Documenting the places where friendly people on the internet, who are obsessed with what you are learning, hang out is super helpful. You'll remember to ask them a question when you get stuck!

With this information in place, I start working on the project. As questions come to mind I write them down in a "learning log"—bullets in a document. If there's a large piece of knowledge or tool that's missing I'll add it to the top of the document and handle it later.

I've found this live-blogging-style learning log helpful, even if no one reads it. Writing down questions and problems as they come to mind while I'm learning forces me to clarify and refine my thinking. This often helps me solve a problem quickly. Writing down the question helps prompt my mind to provide better & unique answers.

As a meta-point: writing down this little guide helped me better structure my learning process for my next project!

Continue Reading