Raspberry Pi Configuration Tips & Tricks
Tags: raspberry-pi • Categories: Learning
Through an unfortunate turn of events, the SD card in my raspberry pi got corrupted. Apparently this happens quite often: SD cards are not designed for constant write activity the way a standard SSD is (this was news to me!).
This time around, I decided to run many of the applications I put on the pi in docker containers (here they are), so it’s all self-documented. Below are notes on what I learned while setting up the pi again, plus some misc devops-style tips & tricks that would be useful in any linux server environment. The nice thing about a pi is that it gives you an excuse to learn about interesting linux internals.
(I’ve written about my raspberry pi setup process in the past, if you want to read the precursor to this post).
Installing Docker on Pi
You want the latest version of docker, not the one that is available through the standard package install.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker pi
This will grab the latest version of docker compose, which is very helpful for easily managing all of your docker containers. Note that the usermod group change only takes effect after you log out and back in.
You can run this script again to update docker; there’s not a more streamlined update process.
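For reference, a compose file can be as small as the sketch below; the service name, image, and volume path are illustrative, not my actual config:

```yaml
# Example docker-compose.yml (service/image/paths are illustrative).
services:
  pihole:
    image: pihole/pihole:latest
    network_mode: host
    restart: unless-stopped
    volumes:
      - ./etc-pihole:/etc/pihole
```

Bring everything up with docker compose up -d from the directory containing the file.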
Additional Utilities
You’ll want to install some additional tools to make your life easier on the pi.
sudo apt-get install dnsutils ripgrep httpie fd-find fdupes
- In order to use dig (helpful for pi-hole related debugging), you’ll need dnsutils, which is why I’ve included it above.
- fd is fdfind on the pi. Not sure why fd isn’t used like on macOS.
- I find http (from httpie) way better than curl.
Upgrading Utilities
Here’s how to upgrade a specific CLI tool:
sudo apt-get update
sudo apt-get upgrade git
Increase Swap Size
You’ll need more swap if you’re hosting anything memory-hungry. This is a great guide. I also had to explicitly set the max swap size for this to work for me.
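On Raspberry Pi OS the swap file is managed by dphys-swapfile, so both the size and the max-size tweak mentioned above live in its config. A sketch assuming that default setup (the 2048 MB values are examples; size to your needs):

```shell
# In /etc/dphys-swapfile, set both the size and the cap (values in MB):
#   CONF_SWAPSIZE=2048
#   CONF_MAXSWAP=2048
# Then rebuild and re-enable the swap file:
sudo dphys-swapfile swapoff
sudo dphys-swapfile setup
sudo dphys-swapfile swapon
free -h   # verify the new swap size
```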
Ports in Use
Query which ports are actively bound on the machine. Helpful for debugging misc hosting issues.
ss -ta         # all TCP sockets
netstat -ltpc  # listening TCP sockets with the owning program, refreshed continuously
Limit System Log Size
Avoid the system log eating up your storage. I logged in one day and nearly all my storage was used up and the pi was struggling with basic operations.
journalctl --vacuum-size=500M
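The vacuum is a one-time cleanup; to cap the journal permanently, you can set a size limit in journald’s config (500M below matches the vacuum size above):

```shell
# In /etc/systemd/journald.conf, uncomment/set:
#   SystemMaxUse=500M
# Then restart journald to apply:
sudo systemctl restart systemd-journald
```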
Docker Log Rotation & Size Limit
By default, container logs are not bounded in docker. You need to configure the log driver via daemon.json
before creating a container in order for its logs to be automatically rotated.
If the file doesn’t exist, you aren’t in the wrong place: just create it.
nano /etc/docker/daemon.json
and then add this content:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m"
}
}
Run Pihole via Docker
You can run pihole (the original use for my pi) in a docker container. I built a custom image to customize the pihole installation with automated daily blocking/allowing.
Setting up Time Machine
I looked into hosting a Time Machine drive via docker, but there was some additional complexity required because the raspberry pi host runs Avahi by default. I decided not to fight the defaults and instead run the Time Machine hosting directly on the machine.
sudo apt-get --assume-yes install netatalk
Configure drives via /etc/netatalk/afp.conf:
[Global]
mimic model = TimeCapsule6,106
[ExternalStorage]
path = /media/pi/ExternalStorage
[TimeMachine]
path = /media/pi/TimeMachine
time machine = yes
Edit the hosts line in /etc/nsswitch.conf:
hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4 mdns
Finally, restart the service:
service netatalk restart
Time Machine Stops Working Due To Read Error
No idea why this started happening, but it manifested itself as a write error in Time Machine. Via ssh I tested writing to the drive:
touch: cannot touch 'hello': Read-only file system
I decided to unmount and remount the drive. To do this I needed to determine what was using the drive:
lsof 2>/dev/null | rg TimeMachine
And turn off netatalk (AFP host):
sudo service netatalk stop
After removing all active uses of the drive, I could kick the drive:
sudo umount /media/pi/TimeMachine && sudo mount /media/pi/TimeMachine
Which fixed the issue.
Dynamic, Virtual Hostnames
Another fun thing I wanted to do was allow the various little apps I have running locally to be accessed via a local-only domain. There’s a neat project nginx-proxy which does just this.
What is most interesting about this project is docker-gen, which communicates with the local docker API using a unix socket shared into the docker container running nginx-proxy. You can think of docker-gen as watching for ‘docker webhooks’, rendering a template, and issuing a completion command. Very neat tool for dynamically generating system configuration.
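A sketch of the pattern, following the nginx-proxy README (the app image and domain here are illustrative): any container started with a VIRTUAL_HOST environment variable gets an nginx vhost generated for it automatically.

```shell
# Start the proxy with the docker socket shared in read-only:
docker run -d --name nginx-proxy -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  nginxproxy/nginx-proxy
# Containers that set VIRTUAL_HOST are picked up and routed:
docker run -d -e VIRTUAL_HOST=storj.hole some-app-image
```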
In any case, this was pretty easy to get rolling. I ran into two main issues:
- If you have some images (like pihole) running as network_mode: host, it won’t work using the default image. This PR will fix the problem.
- Add DNS entries for the local network. The easiest way to do this is to add a /etc/hosts-style file at /etc/pihole/custom.list. Example: 192.168.7.35 storj.hole
Use UUIDs for Fstab Configuration
The /dev/sdb-style mount locations on the pi do not seem to be stable across reboots. If you want consistent mountpoints, use UUID= references in your fstab:
UUID=70c6a837-06c7-4d28-ace1-5ad73980eb72 /media/pi/ExternalStorage ext4 rw,nosuid,nodev,relatime,errors=remount-ro 0 2
Restart MDNS
I still don’t fully understand mdns and how devices on the network find devices broadcasting a .local address. I’ve found that my macbook sometimes can’t resolve the raspberrypi.local domain. To fix this, I need to restart the service on my pi managing this domain:
sudo service avahi-daemon restart
I’m not sure why this happens. I’m hoping that investigating the logs will help here. This command outputs all logs for the mdns service on the pi since boot:
journalctl -u avahi-daemon -b
Printer Sharing Setup Using Cups
First, install CUPS:
sudo apt-get install cups
sudo usermod -a -G lpadmin pi
CUPS is only hosted on the local interface (by default, the port isn’t bound to a network-accessible interface), so you can’t access it via raspberrypi.local. Access it at http://localhost:631/ over VNC instead.
It’s unclear whether the debian package includes the latest version of brlaser, which is required for printing to work on my specific brother printer (which I recommend; it’s been reliable the last 5+ years with no issues), so build it from source:
git clone https://github.com/pdewacht/brlaser
cd brlaser
sudo apt-get install cmake libcupsimage2-dev libcups2-dev
cmake . && make && sudo make install
If you run into issues, restart everything:
sudo service cups-browsed restart
sudo service cups restart
Network & IO Monitoring on Pi
These tools are useful for top-like IO monitoring and network monitoring:
sudo apt-get install nethogs iotop
Copying SSH Keys
On your local machine run the following to copy your public key and eliminate the need for password login:
ssh-copy-id -i ~/.ssh/id_rsa.pub pi@raspberrypi.local
Storage Usage Inspection
What is taking up all the space on your hard drive?
sudo du -sh /*
How much free space do you have?
df -h
Even better is ncdu, which lets you interactively explore the filesystem and discover what is sucking up all of your space:
sudo apt-get install ncdu
ncdu / --exclude /media/pi
LXSession Logs Consuming Massive Disk Space
I ran into an issue where /home/pi/.cache/lxsession/LXDE-pi/run.log
was taking up all of my disk space.
It’s unclear why, but other people had the same issue and linking this log to /dev/null
seems like the answer:
rm /home/pi/.cache/lxsession/LXDE-pi/run.log
ln -s /dev/null /home/pi/.cache/lxsession/LXDE-pi/run.log
It’s been a couple of months and this has worked just fine for me.
Removing Dependency Files
Every time I get a new computer I dump old projects I haven’t touched in a while to my external storage. The issue with this approach, especially if you are using a backup solution like Arq to back up the external storage, is you end up with a ton of vendor files.
Here are some examples of the directories you probably want to remove from your backups:
- tmp/cache (rails)
- vendor/bundle (rails/ruby)
- node_modules (node)
- __MACOSX (macos)
Here’s how to dry run a removal:
fdfind --type d --full-path 'vendor/bundle$' --unrestricted -x echo "Would remove: {}" | tee ruby_bundle_remove.log
Now we can safely remove all vendor/bundle
directories:
fdfind --type d --full-path 'vendor/bundle$' --unrestricted -x sh -c 'echo "Removing: {}"; rm -rf "{}"'
Here’s how to optimize each git repo, which will reduce the number of files in the .git directory. This will speed up Arq backups.
fdfind --type d --full-path '\.git$' --unrestricted -x sh -c 'echo "Optimizing {}"; cd "{}"; git gc --aggressive --prune=now'