Cloud-based Development with Docker & Mutagen
Remember the days we needed bulky, beefy machines to write software? Machines that could handle the load of our high-powered IDE, along with our backend app, our in-memory cache store(s), our DB, our message queue, our frontend service, its ancillary services, and… you get the point.
Do you remember the times when our first day on the job as an individual contributor often involved “getting your development environment set up”? Your new manager tells you to take a look at the README, only to find it hasn’t been updated since the last time someone else onboarded 6 months ago, and simply doesn’t work as advertised any more. Being the good citizen you are, you note the changes that need to be made, open a pull request to update the README, and get it merged. Only to discover a couple of months later—the next time someone onboards—your changes are now outdated.
Do you remember the times when language and framework version updates needed to be communicated and coordinated with every single engineer, when you had to be an expert in rbenv, rvm, nvm, and <insert your favorite version manager here>?
As much as I hate to admit it, this is not a thing of the past. It turns out many of us still work this way: we spend way too much time worrying about the software and hardware dependencies of our applications. Time taken away from working on our bread and butter—the applications themselves.
Rails on Docker #
Docker is immutable, predictable, reproducible, and portable. Using Docker (or any containerization platform) for development has solved some of our problems. It has made it so we can focus on writing code, not on the underlying system our code will be executed on; it abstracts away the things we don’t care about and allows us to focus on generating value for our customers.
To demonstrate how software teams can leverage Docker to streamline the development process, let’s create a sample Rails application from scratch.
Initialize & containerize the project
In a new folder, initialize a minimal, PostgreSQL-backed Rails project using a transitory Ruby 3 container. I picked PostgreSQL because it is a DBMS I am familiar with, but feel free to substitute your preference (you will need to make a few adjustments to the steps outlined below).
```shell
$ docker run \
    --rm \
    --interactive \
    --tty \
    --volume $(pwd):$(pwd) \
    --workdir $(pwd) \
    ruby:3.0 bash

# Inside the transitory container
$ gem install rails --version 7.0.1
$ rails new . --database=postgresql --minimal
$ exit
```
After initializing the project, containerize the application by creating a Compose file with the two services required to run it. This step encapsulates our software dependencies.
```yaml
# docker-compose.yml
version: "3.8"
services:
  web:
    image: ruby:3.0
    container_name: web
    working_dir: /usr/src/app/
    command:
      - "/bin/sh"
      - "-c"
      - "bundle install && exec bin/rails server --binding 0.0.0.0 --port 3000"
    environment:
      DATABASE_URL: postgres://web:password@pg/app_development
    depends_on:
      - pg
    volumes:
      - .:/usr/src/app/
    ports:
      - 3000:3000
  pg:
    image: postgres:14.1
    container_name: pg
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: web
      POSTGRES_DB: app_development
```
Run the application, then navigate to http://localhost:3000 to confirm everything is working as expected:
```shell
$ docker-compose up -d

# Give `web` a moment to install gems
$ open http://localhost:3000
```
This looks good. However, every time we `docker-compose up`, our `web` service performs a fresh `bundle install`; i.e. gems installed in that step aren’t persisted between runs. Out of the box, `ruby:3.0` bundles gems in `/usr/local/bundle/`. I’ll use a named volume mounted at that directory to decrease startup time.
```diff
diff --git a/docker-compose.yml b/docker-compose.yml
index 76e28a3..402e6aa 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -14,6 +14,7 @@ services:
       - pg
     volumes:
       - .:/usr/src/app/
+      - gem_data:/usr/local/bundle/
     ports:
       - 3000:3000
   pg:
@@ -23,3 +24,5 @@ services:
       POSTGRES_PASSWORD: password
       POSTGRES_USER: web
       POSTGRES_DB: app_development
+volumes:
+  gem_data:
```
Note: Remember to run `docker-compose up -d` after updating your Compose file so changes can take effect.
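To sanity-check that the gem volume is doing its job, you can list the volumes Compose created and watch `web`’s logs across a restart; Bundler should now report that the gems are already installed. A rough sketch (Compose prefixes named volumes with your project’s name, so the exact volume name will vary):

```shell
# Confirm the named volume exists
$ docker volume ls --filter name=gem_data

# Restart the service and check that `bundle install` is now (nearly) a no-op
$ docker-compose restart web
$ docker-compose logs web
```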
For our PostgreSQL service we can do something similar. The `postgres` image stores its data in `/var/lib/postgresql/data/`, so we can mount a second named volume there.
```diff
diff --git a/docker-compose.yml b/docker-compose.yml
index 402e6aa..c79ee6f
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -24,5 +24,8 @@ services:
       POSTGRES_PASSWORD: password
       POSTGRES_USER: web
       POSTGRES_DB: app_development
+    volumes:
+      - db_data:/var/lib/postgresql/data/
 volumes:
   gem_data:
+  db_data:
```
Rails on Docker on Mutagen #
Now that we have an abstraction for our software dependencies, it would be nice if we could do something similar for our hardware. When we first started working on our project, a MacBook Air sufficed. However, as our application grew, we added more services and we now need beefier hardware. Since Docker uses a client-server architecture, we can run the Docker daemon remotely, allowing us to adjust the hardware our applications run on pretty effortlessly. Cloud-service providers like AWS, Azure, and Google Cloud make provisioning servers really easy these days.
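As a quick illustration of that client-server split: the stock Docker CLI can already target a remote engine over SSH via a named context. This isn’t the approach we’ll use below (we’ll use an environment variable instead), and it assumes you have an SSH alias such as `docker.remote` configured, but it shows how little the client cares about where the daemon lives:

```shell
# Create and select a context that points the CLI at a remote daemon over SSH
$ docker context create remote --docker "host=ssh://docker.remote"
$ docker context use remote

# The Server section now describes the remote engine
$ docker version
```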
Before continuing, get rid of any Docker objects created in previous steps with `docker-compose down --volumes`. We don’t need them any more since we’ll be recreating those same objects on our remote host.
Setting up a Remote Docker Host
If we wanted to, we could spin up our own EC2 instance and configure it as a remote Docker Host manually. However, for the sake of time, I’m going to use DigitalOcean to create a Docker Droplet running on Ubuntu 20.04.
Important: When setting up your server, make sure to set up SSH Authentication!
After you’ve got a server up and running, update the SSH configuration on your local machine for secure, easy access to it. Check out the ssh_config man page for more configuration options.
```
# ~/.ssh/config
Host docker.remote
  HostName x.x.x.x
  User root
  StrictHostKeyChecking no
  ConnectTimeout 120
```
Now that you’ve configured your SSH client, you should be able to connect to your remote server. On DigitalOcean I noticed the default firewall rate-limits connections on port 22 (the LIMIT rule below), which can interfere with tools that open many SSH connections, so I relaxed it a bit.
```shell
$ ssh docker.remote

# On the Docker Host
$ ufw status
Status: active

To           Action      From
--           ------      ----
22/tcp       LIMIT       Anywhere
22/tcp (v6)  LIMIT       Anywhere (v6)

$ ufw allow 22/tcp
Rule updated
Rule updated (v6)

$ ufw status
Status: active

To           Action      From
--           ------      ----
22/tcp       ALLOW       Anywhere
22/tcp (v6)  ALLOW       Anywhere (v6)
```
Remote File Sharing with Mutagen
Since our application will run remotely, we need the ability to synchronize source code from our local machine to the Docker Host. Enter Mutagen, a tool that facilitates cloud-based development and does precisely what we need.
First, make sure you’ve installed Mutagen on your local machine. Next, create a session to synchronize the source code (`.`) to a directory on the remote Docker Host via SSH (`/usr/src/app/`). For our purposes we’ll configure Mutagen to ignore `log/`, `tmp/`, and any version control system directories (`.git/` in my case).
```shell
$ mutagen sync create . docker.remote:/usr/src/app/ \
    --name code \
    --sync-mode two-way-resolved \
    --ignore /log/,/tmp/ \
    --ignore-vcs
```
But that’s a whole lot to type every time we want to start this sync session. Luckily, Mutagen provides `mutagen project`, an orchestration mechanism in the same spirit as Docker Compose and Kubernetes, to run scripts and to start/stop sessions. Since we’re going to use `mutagen project` for orchestration, terminate the previous sync with `mutagen sync terminate code`. Then create a Mutagen project file:
```yaml
# mutagen.yml
sync:
  code:
    alpha: "."
    beta: "docker.remote:/usr/src/app/"
    mode: "two-way-resolved"
    ignore:
      paths: [ "/log/", "/tmp/" ]
      vcs: true
```
Now we can start and terminate the sync session with `mutagen project start` and `mutagen project terminate`.
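If a sync ever looks stale, Mutagen’s status commands are handy for debugging. A quick sketch (`code` is the session name we chose in `mutagen.yml`):

```shell
# One-shot status of all sync sessions
$ mutagen sync list

# Continuously watch the `code` session (Ctrl-C to exit)
$ mutagen sync monitor code
```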
Composing with a Remote Docker Host
Our source code is being synchronized! Next, we need to make sure `docker-compose` commands are executed against our remote Docker Host. To do this, create a `.env` file that contains a custom entry for `DOCKER_HOST`:

```shell
# .env
DOCKER_HOST="ssh://docker.remote"
```
Disclaimer: At the time of writing, Compose V2 (`docker compose`, without the hyphen) does not support setting a custom `DOCKER_HOST` via the `.env` file. Compose V1, however, did; this is a known issue. V2 still respects inline environment variables, so one workaround is to create an alias: `alias dockerr='DOCKER_HOST="ssh://docker.remote" docker'`.
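The alias works because an inline variable assignment applies only to the single command it prefixes; the parent shell’s environment is untouched. A quick, self-contained demonstration (no Docker required):

```shell
# An inline assignment is scoped to the one command it prefixes
unset DOCKER_HOST
DOCKER_HOST="ssh://docker.remote" sh -c 'echo "child sees: $DOCKER_HOST"'
# prints: child sees: ssh://docker.remote

echo "parent sees: ${DOCKER_HOST:-unset}"
# prints: parent sees: unset
```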
In our Compose file, we also need to update `web`’s bind mount. Since our Docker Host has changed, instead of bind mounting the current directory (`.`), we should bind mount the directory Mutagen is synchronizing files to (`/usr/src/app/`).
```diff
diff --git a/docker-compose.yml b/docker-compose.yml
index c79ee6f..233ac68 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -13,7 +13,7 @@ services:
     depends_on:
       - pg
     volumes:
-      - .:/usr/src/app/
+      - /usr/src/app/:/usr/src/app/
       - gem_data:/usr/local/bundle/
     ports:
       - 3000:3000
```
If we run `docker-compose up -d` once again, our web application is now accessible on port 3000 of our remote host.
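A quick way to confirm the commands really ran remotely (a sketch; substitute your Droplet’s IP for `x.x.x.x`):

```shell
# The containers should be listed by the remote daemon…
$ docker-compose ps

# …and the app should answer on the Droplet’s public IP, not localhost
$ curl --head http://x.x.x.x:3000/
```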
Putting it all together
In addition to file synchronization, Mutagen provides network forwarding. This allows us to connect to any of the Docker Host’s services as if they were running locally. Rather than accessing our web application through an IP (`http://<ip>:3000/`), we can access it via `http://localhost:3000/`:
```diff
diff --git a/mutagen.yml b/mutagen.yml
index ba992a2..746c860 100644
--- a/mutagen.yml
+++ b/mutagen.yml
@@ -6,3 +6,14 @@ sync:
     ignore:
       paths: [ "/log/", "/tmp/" ]
       vcs: true
+
+forward:
+  web:
+    source: tcp:localhost:3000
+    destination: docker.remote:tcp::3000
```
Lastly, now that sync and forward sessions are being orchestrated, we can use Mutagen Project hooks to run Docker Compose commands to bring our services up/down.
```diff
diff --git a/mutagen.yml b/mutagen.yml
index 000c8a6..68a4e78 100644
--- a/mutagen.yml
+++ b/mutagen.yml
@@ -11,3 +11,9 @@ forward:
   web:
     source: tcp:localhost:3000
     destination: docker.remote:tcp::3000
+
+afterCreate:
+  - docker-compose up --detach
+
+beforeTerminate:
+  - docker-compose down
```
Goodbye, README! …for the most part, at least. To start the project, all we need to do is run `mutagen project start`; conversely, to stop it, run `mutagen project terminate`. By encapsulating your hardware and software dependencies with Docker & Mutagen, you:
- reap the benefits of containerization, including immutability, reproducibility, predictability, and portability
- achieve environment uniformity (development, test, staging, production, etc.)
- streamline the onboarding and feature development processes
- can say goodbye to version managers
- no longer need to carry a brick in your backpack
- no longer need to pay for the $3,000 brick in your backpack
- and most importantly, you can focus on generating value for your customers