I'm a big fan of building things from the bottom up, exposing myself to as much of the nitty-gritty as possible. Obviously there's a limit on each project, but it's almost always defined for me as one step more than I'm already comfortable with (e.g. this isn't going to be a post about building a REST API using assembly).
Over the last several months, I've been slowly building up the list of services that I'm self-hosting, where the process looks something like:
...it is a fairly involved process, but a pretty rote one that doesn't require much creativity now that I have the foundation built up. Recently, I needed a little more customized functionality than I was getting from other people's work, so I decided it was a good time to jump down another level: building my own container. Specifically, I wanted Rust (with as little extra stuff as possible), I wanted it to be well-managed (more on that later), and I wanted reusability (general enough that I can solve different things with just this one container).
The Process
I decided to start with a REST API, and after a bit of research, specifically `axum` instead of `warp`. I like that it's built + maintained by the `tokio` team, that it's a "relatively thin layer on top of `hyper`", and that it seems to have a high ceiling for what you can do with it. This API would run in a Docker container that also plugs into one of my compose stacks, configured similarly. What that means is I wanted it to follow the precedent of the other 20+ services, including things like:
- semver tagging (`0.0.1`, `0.8.4`, etc.): it's better practice imo not to use `latest` for Docker images, so why should my own image be any different?
- environment variables for controlling behavior (port, timezone, log level, etc.): I so, so, so appreciate when I don't have to hardcode the same port value into two different files (the compose file and a separate config file for that app, or worse, the UI). I put it somewhere once, all the config can be derived from it, and it's easy to find (a rough sketch of what I mean follows).
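To make that concrete, here's roughly what one of those compose entries looks like. The service name and tag here are hypothetical stand-ins, not one of my actual services:

```yaml
services:
  example:                 # hypothetical service name
    image: example:0.0.1   # pinned semver tag instead of `latest`
    environment:
      - PORT=${PORT}       # defined once in the stack's .env file...
      - TZ=${TZ}
    ports:
      - "${PORT}:${PORT}"  # ...and derived everywhere else
```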
The One Step More, First
So I began with getting Rust (and just Rust) into a Docker container. Easy: there's an official image, but those look like they get big without some playing around (gigabytes?? No thanks):
```dockerfile
# Required args to build.
ARG RUST_VERSION
ARG NAME
ARG CMD

# First, copy the files into the image and build inside.
FROM rust:${RUST_VERSION} AS builder
ARG NAME
WORKDIR /srv/${NAME}
COPY . .
RUN cargo install --path .

# Then copy the binary into a slimmed image.
FROM debian:bookworm-slim
ARG CMD
ENV CMD=${CMD}
RUN apt update && rm -rf /var/lib/apt/lists/*
COPY --from=builder /usr/local/cargo/bin/${CMD} /usr/local/bin/${CMD}
CMD "${CMD}"
```
A little bit of explanation:
- a couple iterations led to pulling out the args at the top:
  - same principle: if I want to upgrade the Rust version, for example (pretty likely in the future), it should be as easy as possible to find. If I want to reuse this for a second project for some reason (not super unlikely), I don't need to re-familiarize myself/mentally grep for what to change.
  - this also came about because I couldn't easily set these values dynamically in the `Dockerfile`. I wanted the file to automatically determine `NAME` from the directory it lived under, but even with shell commands embedded in, it never came out neat.
- the reason `NAME` and `CMD` aren't the same is that I named the project folder with the URL it was going to live at (`example.irith.dev`). Except Cargo didn't like that, so the actual package name in `Cargo.toml` uses dashes instead (`example-irith-dev`).
- I did want to use `alpine` instead of `debian:bookworm-slim`. When it started delving into swapping the image's libraries to `musl` and other changes, I decided the effort to image size tradeoff wasn't worth it (a rough sketch of that route follows, for the curious).
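For reference, this is roughly the direction that rabbit hole pointed. It's a hypothetical, untested sketch, not something I shipped; the `-alpine` image variants and `musl-dev` package are the usual pieces for building against musl:

```dockerfile
# Hypothetical alpine variant (the road not taken): the alpine Rust
# images target musl by default, and musl-dev is needed for linking.
ARG RUST_VERSION
ARG NAME
ARG CMD

FROM rust:${RUST_VERSION}-alpine AS builder
ARG NAME
RUN apk add --no-cache musl-dev
WORKDIR /srv/${NAME}
COPY . .
RUN cargo install --path .

# A musl-linked binary runs on plain alpine with no extra libraries.
FROM alpine
ARG CMD
ENV CMD=${CMD}
COPY --from=builder /usr/local/cargo/bin/${CMD} /usr/local/bin/${CMD}
CMD "${CMD}"
```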
I'm also a bit of a minimalist, so it was unfortunate that these iterations ended up needing another file just to run the `Dockerfile`. But it did end up serving more purpose than simply plugging values into the `Dockerfile`:
```makefile
# Set + derive variables for re-use.
NAME := $(notdir $(CURDIR))
TAG ?= 0.0.1
RUST_VERSION ?= 1.87
CMD := $(shell echo "$(NAME)" | sed "s|\.|-|g")
PORT ?= 10000


# Default stage.
build:
	docker build \
		-t "${NAME}:${TAG}" \
		--build-arg "RUST_VERSION=${RUST_VERSION}" \
		--build-arg "NAME=${NAME}" \
		--build-arg "CMD=${CMD}" \
		.

test:
	docker run \
		-it --rm \
		-e "PORT=${PORT}" \
		-p "${PORT}:${PORT}" \
		--name "${NAME}" \
		"${NAME}:${TAG}"
```
- the same args are at the top (along with some others), dynamically deriving a couple of them and keeping them all in one easy-to-find place.
- I get to list out commands relevant to the project that:
  - are pretty verbose to type out each time
  - can reuse those same variables easily
  - I would use fairly often
- when, inevitably, I come back eons later with the question "what was that one-liner I used to test the container?" and no memory to reply, the file answers back pretty quickly (a quick example follows this list).
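Because the variables use `?=` defaults, they can also be overridden straight from the command line (standard make behavior). The values here are just examples:

```console
$ make build TAG=0.1.0
$ make test PORT=8080
```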
These two files together get to the point of "well-managed" I talked about a bit earlier. There's a pretty defined process for how the image is compiled, and built-in paths for making changes to the image when that inevitably happens. There's a bit of prediction in what those changes may be ("I'll probably upgrade the Rust version at some point", or "I'm definitely going to be iterating enough to use semver and tag those changes accordingly", or even "I could end up reusing this flow for a different Rust project"). But there isn't crazy generalization for situations I probably won't get into ("`podman` instead of `docker`? Better make a var for that"). Again, that effort-to-potential-benefit tradeoff lands in "not worth it anymore" territory. With these two files, and a boilerplate hello-world `main.rs`, I get:
...a working Docker container with just the necessary stuff. For the curious, this image came out to 75.3MB!
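For reference, that boilerplate `main.rs` is nothing more than what `cargo new` generates:

```rust
fn main() {
    println!("Hello, world!");
}
```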
The Comfortable, Second
Afterwards, I set up `axum` with a basic `/health` endpoint:
```rust
use anyhow::Result;
use axum::{
    routing::get,
    Router,
};
use tokio::{
    net::TcpListener,
    signal,
};

use std::env;


#[tokio::main]
async fn main() -> Result<()> {
    let port = env::var("PORT")
        .ok()
        .and_then(|p| p.parse::<u16>().ok())
        .unwrap_or(10000);

    println!("Starting server on port `{}`...", port);
    let router = Router::new()
        .route("/health", get(health));

    let listener = TcpListener::bind(("0.0.0.0", port)).await?;
    // Graceful shutdown hangs off of `serve`, not the router itself.
    axum::serve(listener, router)
        .with_graceful_shutdown(setup_shutdown())
        .await?;

    Ok(())
}

/// Attach handlers for any shutdown signals.
async fn setup_shutdown() {
    let ctrl_c = async {
        signal::ctrl_c()
            .await
            .expect("Failed to install ctrl-c handler");
    };

    #[cfg(unix)]
    let terminate = async {
        signal::unix::signal(signal::unix::SignalKind::terminate())
            .expect("Failed to install terminate signal handler")
            .recv()
            .await;
    };

    // On non-unix targets, fall back to a future that never resolves.
    #[cfg(not(unix))]
    let terminate = std::future::pending::<()>();

    tokio::select! {
        _ = ctrl_c => println!("Received ctrl-c..."),
        _ = terminate => println!("Received terminate signal..."),
    }
}

/// Basic endpoint to check API status.
async fn health() -> &'static str {
    "I'm healthy!"
}
```
- I always use anyhow; I think it's a fantastic library for making errors super easy to handle.
- the `PORT` environment variable is read + used (the precedent mentioned above: configure once, pass everywhere).
- I like the extra logging on signal interrupts. The explicit "hey, I saw your message, I'm shutting down" ack is reassuring. Plus, as someone who spends too much time debugging systems I didn't write, I appreciate a little extra verbosity in my logs (better to have it when I need it than to add it in after something happens).
Pretty straightforward! We run `make build test`, send a test `curl`:
```console
$ curl -i http://localhost:10000/health
HTTP/1.1 200 OK
content-type: text/plain; charset=utf-8
content-length: 12
date: Wed, 21 May 2025 04:07:27 GMT

I'm healthy!⏎
```
...and we're good! Now I'm free to build out however many endpoints in a modular way, and even extend beyond REST (I'm expecting `tokio` to come in handy for consuming events from the Docker socket; a parting sketch of that idea is below).
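Purely as a rough, hypothetical sketch of that last idea, and not something I've built yet: the Docker daemon speaks plain HTTP over its unix socket, so tokio alone gets surprisingly far (in practice, a purpose-built crate would be the saner choice):

```rust
use anyhow::Result;
use tokio::{
    io::{AsyncReadExt, AsyncWriteExt},
    net::UnixStream,
};

/// Hypothetical sketch: stream Docker events straight off the socket.
async fn watch_docker_events() -> Result<()> {
    let mut stream = UnixStream::connect("/var/run/docker.sock").await?;

    // `/events` holds the connection open and streams JSON as things happen.
    stream
        .write_all(b"GET /events HTTP/1.1\r\nHost: docker\r\n\r\n")
        .await?;

    let mut buf = [0u8; 4096];
    loop {
        let n = stream.read(&mut buf).await?;
        if n == 0 {
            break; // daemon closed the connection
        }
        // Real code would parse the chunked JSON; this just proves the pipe.
        print!("{}", String::from_utf8_lossy(&buf[..n]));
    }
    Ok(())
}
```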