I have an older intel MacBook (2016, 2.9ghz) that I use for personal projects. My corporate machine is an M1 Macbook Pro and I love it, but I’ve been holding off on replacing my personal machine until the pro M2 comes out (hopefully soon!).
I love playing with new technology, especially developer tools, and when I got accepted to the codespaces beta I couldn’t resist tinkering with it: it would speed up my ancient MacBook, let me try some new tech, and give me the ability to learn more ML/AI tooling in the future.
I largely agree with this analysis.
Codespaces are very cool. They work better than I expected—it felt like I was developing on a local machine. Given how expensive the sticker pricing is, I don’t get why you wouldn’t just buy a more powerful local machine in a corporate setting (codespaces is free for open source work). I can’t see devs being ok with a Chromebook vs MacBook pro, so the cost savings aren’t there (i.e. buy a cheaper machine and put the savings into rented codespace).
You could run a similar dockerized setup locally on the MacBook if you wanted to normalize the dev environment (which is a big benefit, especially in larger orgs). I think this is one of the best benefits of codespaces: your development environment is completely documented and normalized, so it’s portable across machines.
Here are some notes & thoughts on my experience with codespaces:
- A codespace is essentially a docker image running on a VM in the cloud, wired up to your local VS Code installation in a way that makes it feel like you aren’t using a remote machine.
- `gh pr view --web`, etc. all work (i.e. they open a local browser) and integrate with macOS. They’ve done a decent job integrating codespaces into the native experience, so you forget you’re working on a remote machine.
- If you are curious, this is done by a magic environment variable:
- `Add Development Container Configuration` is the command you need to run to autogenerate the default `.devcontainer/` config for your codespace.
- Your dotfiles are magically cloned to
- File system changes are not instantly updated in the file explorer. There is a slight delay, which is frustrating.
- It looks like a reference specification has emerged after the initial beta. Lots of examples/open source code still reference some of the old configuration style, so be careful not to cargo-cult everything if you want to build things in the latest style that will be resilient to changes.
- `/workspaces/.codespaces/shared/.env` has a bunch of tokens and context about the environment.
- You can have multiple windows/editors against multiple folders: clone additional folders to `/workspaces` and then open a new editor window `cd`’d into that folder.
- Terminal state is not restored when a codespace is paused.
- Codespace logs are persisted to `/workspaces/.codespaces/.persistedshare/EnvironmentLogbackup.txt`. You can also access them via the CLI: `gh codespace logs`.
- Some of the utilities used to communicate with your local installation of VS Code are located in `~/.vscode-remote/bin/[unique sha]/bin/`. It’s interesting to poke around and understand how client communication works.
- `/workspaces/.codespaces/shared/.env-secrets` contains GitHub credentials and other important secrets.
- `CODESPACE_VSCODE_FOLDER` is not set up in
- If you’ve used remote SSH development, much of the magic that makes that work is used in a codespace. There’s a hidden `.vscode` folder installed on the remote machine, along with some binaries which run there to make VS Code work properly.
I couldn’t find clear documentation on the load order: when does your code get copied to the container, when do all of the VS Code tools start up on the machine, etc.? https://containers.dev/implementors/spec/ covers the general devcontainer specification, but it’s not too helpful. Here’s the order as best I can tell:
- Dockerfile. Your application code does not exist yet, and `features` are not installed.
- Features (like `brew`). Each feature is effectively a bundle of shell scripts that are executed serially. Application code does not exist at this point.
- Post Install. Dockerfile is built, features are installed, application code exists, dotfiles are not installed.
- Dotfiles. At this step (and all previous steps), `code` (the VS Code CLI) does not exist and has not yet been installed.
- Sometime after this, the `code` binary is installed and some of the daemon-like processes that run on the remote machine are started up. From what I can tell, there’s no single-run lifecycle hook that you can use at this stage.
ASDF: Version Manager for Everything
I really like `asdf`. The devcontainer image examples had a completely different runtime for each major language. What if you use multiple languages? What if your environment is more custom? I thought it would make sense to try using `asdf` across all projects, as opposed to language-specific builds.
- If you install `asdf` via homebrew, it will throw asdf installation files in
- Many tools, including ElixirLS, assume that the full installation exists in `~/.asdf`. This caused issues on the codespace: the shell script to start ElixirLS was not using the default shell and did not seem to be sourcing standard environment variables. I’m guessing that, depending on how the extension is built, it does not properly run in
- I ran into weird issues with pyright: `poetry run pyright .` returned zero errors, while running it within a `poetry shell` triggered a lot of errors relating to missing imports (related issue).
- The Erlang project uses devcontainers and asdf, which is a good place to look for examples.
Here’s the image I ended up building and it’s been working great across a couple of projects.
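As a rough sketch, an asdf-based devcontainer Dockerfile can look something like this (the image tag, asdf version, and plugin list are illustrative assumptions, not exactly what my image uses):

```dockerfile
# Sketch: a single base image with asdf managing every language runtime
FROM mcr.microsoft.com/devcontainers/base:ubuntu

USER vscode
WORKDIR /home/vscode

# install asdf itself (pin whatever release is current)
RUN git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.11.3 \
    && echo '. "$HOME/.asdf/asdf.sh"' >> ~/.bashrc

# .tool-versions pins all runtimes in one place; asdf falls back to
# ~/.tool-versions when a project doesn't have its own
COPY .tool-versions /home/vscode/.tool-versions
RUN bash -c '. "$HOME/.asdf/asdf.sh" \
    && asdf plugin add nodejs \
    && asdf plugin add python \
    && asdf install'
```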
Docker Compose & Docker-in-Docker
Using docker compose (to run postgres, redis, etc) is super helpful but is not straightforward. Here’s how I got it working:
- You can specify a `docker-compose.yml` file to be used in your `devcontainer.json`. This seems like a great idea until you realize that you can’t manage the other services started through the compose definition: you are "trapped" inside your application container and cannot inspect or manage the other processes at all.
- The more flexible approach is to install docker inside a single container (docker-in-docker). This requires a bit more setup, specifically passing additional flags to the parent docker container in order to be able to run docker inside it.
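The docker-in-docker route is mostly handled by an official devcontainer feature, which wires up the privileged flags for you. A minimal sketch:

```jsonc
{
  "build": { "dockerfile": "Dockerfile" },
  "features": {
    // installs the docker engine + cli inside the container and configures
    // the privileged options the parent container needs
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  }
}
```

With this in place you can run `docker compose up -d postgres redis` from the integrated terminal and inspect or manage those services like any other containers.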
Dotfiles
My dotfiles are very well documented, but they were not ready for codespaces. I needed to do some work to separate the macOS-specific pieces from the cross-platform compatible tools.
- Here’s a great guide on how to get your dotfiles set up.
- Thankfully, brew works on Linux and has a really easy integration with codespaces. This made my life easier since my dotfiles are built around brew.
- Pull out packages that are system-agnostic and stick them in a `Brewfile`. Here’s mine.
- Create an install script specifically for codespaces. Here’s what mine looks like.
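As a sketch of the branching logic (the Brewfile names here are hypothetical), you can key off the `CODESPACES` environment variable, which GitHub sets to `true` inside a codespace:

```shell
#!/usr/bin/env bash
# pick a Brewfile based on whether we're running inside a codespace;
# CODESPACES=true is set by GitHub on codespace machines
brewfile_for_env() {
  if [ "${CODESPACES:-}" = "true" ]; then
    echo "Brewfile.codespaces"
  else
    echo "Brewfile"
  fi
}

# usage: brew bundle --file="$(brewfile_for_env)"
```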
VS Code Extensions
Extensions from Settings Sync are not installed automatically. You have to specify which extensions you want installed on the codespace through a separate configuration.
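Extensions live under the `customizations.vscode` key in `devcontainer.json` (the extension IDs below are just examples):

```jsonc
{
  "customizations": {
    "vscode": {
      // extension IDs to preinstall on every codespace for this repo
      "extensions": [
        "elixir-lsp.elixir-ls",
        "esbenp.prettier-vscode"
      ]
    }
  }
}
```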
Homebrew Installation Failure
Homebrew installation can fail due to old packages (or stale apt-get state, I’m not sure which) installed on the image. If you use a raw base image for your codespace, you need to ensure you run `apt-get update` in order for the homebrew install to work properly.
Another alternative is using the `dev-` variant of many of the base images (here’s an example).
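If you do stay on a raw base image, something like this at the top of your Dockerfile should be enough (the package list is based on homebrew’s documented Linux requirements):

```dockerfile
FROM ubuntu:22.04

# refresh stale apt state before homebrew (or anything else) installs packages
RUN apt-get update \
    && apt-get install -y build-essential procps curl file git \
    && rm -rf /var/lib/apt/lists/*
```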
GPG Signing
It looks like the codespace machine calls some sort of GitHub API to power GPG signing. If you have a `.gitconfig` in your dotfiles, it will overwrite the custom settings GitHub creates when generating the codespace machine, and you’ll run into errors writing commits.
Here’s what you need to do to fix the issue:
```shell
git config --global credential.helper /.codespaces/bin/gitcredential_github.sh
git config --global gpg.program /.codespaces/bin/gh-gpgsign
```
You’ll also want to ensure that GPG signing is enabled for the repository you are working in. If it’s not, you’ll get the following error:
```
error: gpg failed to sign the data
fatal: failed to write commit object
```
You can ensure you’ve allowed GPG access by going to your codespace settings and looking at the "GPG Verification" header.
As an aside, this was an interesting post detailing how to debug git & gpg errors.
Awk and Other Tools
The version of awk on some of the base machines seems old, or significantly different from the macOS version. It wouldn’t even respond to `awk --version`. I installed the latest version via homebrew, which fixed an issue I was having with `git fuzzy log` where "no commit found on line" would be displayed when viewing the commit history.
I imagine other packages are old or have strange versions installed too. If you run into issues with tooling in your dotfiles that work locally, try updating underlying packages.
Here are some useful shell commands to make integrating codespaces with your local dev environment simpler.
```shell
# gh cli does not provide an easy way to pull the codespace machine name when inside a repo
targetMachine=$(gh codespace list --repo iloveitaly/$(gh repo view --json name | jq -r ".name") --json name | jq -r '.[].name')

# copy files from the local to the remote machine. Note that `$RepositoryName` is a
# magic variable that is substituted by the gh cli
gh codespace cp -e -c $targetMachine ./local_file 'remote:/workspaces/$RepositoryName/remote_file'

# create a new codespace for the current repo in the pwd
gh alias set cs-create --shell 'gh cs create --repo $(gh repo view --json nameWithOwner | jq -r .nameWithOwner)'
```
Unsupported CLI Tooling
Here are some gotchas I ran into with my tooling:
- zsh-notify. The macOS popup when a command completes won’t work anymore.
- pbcopy/pbpaste doesn’t work in the terminal.
- You lose all of your existing shell history. There are some neat tools out there to sync shell history across machines, might be a way to fix this.
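One possible workaround for the pbcopy gap that I haven’t fully explored: many terminal emulators support the OSC 52 escape sequence, which lets a remote shell write to the local clipboard. A sketch:

```shell
# emit an OSC 52 escape sequence; terminals that support it will copy the
# base64-encoded payload to the local clipboard, even over a remote session
osc52_copy() {
  printf '\033]52;c;%s\a' "$(printf '%s' "$1" | base64 | tr -d '\n')"
}

# usage: osc52_copy "text from the codespace"
```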
Open Questions
- Is there more control available for codespaces generated by a pull request? Ideally, you could have a script that would run to generate sample data, spin up a web server, etc., and make that web server available to the public internet in some secure way. I think Vercel does this in some way, but it would be neat if this was built into GitHub, tied into VS Code, and allowed for a high level of control.
- I’m still in the process of learning/mastering tmux; there seemed to be some incompatibilities that I’ll need to work around:
- cmd+f within the integrated terminal doesn’t search through the scroll buffer.
- clipboard integration doesn’t work (the main reasons for using tmux are keyboard scroll-buffer search and copy/paste support).
- pbcopy/pbpaste, which I use pretty often, don’t work. A good option is using something like Uniclip, but this will require some additional effort to get working. Other alternatives that might be worth investigating:
- I had trouble with some specific VS Code tasks not working properly. This was due to how some tasks build the shell environment.
- Can you run GitHub Actions locally within the codespace? This would be super cool.
- Looks like it’s not possible right now, but there’s some open source tooling around this which looks interesting.
- There’s got to be a cleaner way to share a consistent ssh key with a codespace for deploys. This post had some notes around this.
- I’m not sure how the timeout works. What if I’m running a long-running test or some other terminal process? Will it be terminated? Is there a way to keepalive the session in some other side process?
- Can you mount the remote drive locally and have it available in the Finder? `scp`ing files to view and manipulate locally is going to get tiring fast.