How I Build and Run This Website
January 15, 2026
I run this blog with Org Mode, Emacs, Docker, and Cloud Run. No framework, no build tool ecosystem. Just plain text files and a small pipeline that I understand end to end.
The Problem
I wanted a static site that was easy to maintain and gave me full control of the publishing pipeline. Most generators felt heavy for a small blog. Jekyll, Hugo, Gatsby: new templating languages, config files, and mental overhead.
I already had a tool that turns my notes into documents: Emacs.
Why Org Mode?
I already write in Org mode for notes and docs. It is readable as plain text and exports cleanly to HTML or PDF.
For a blog, that is ideal. I write in the same format I use every day, and Emacs converts it to HTML. No new syntax or YAML frontmatter.
Here is the source for a typical post:
* Introduction
Some text here…

** A Subsection
More text with a code snippet inline.

#+begin_src python
def hello():
    print("hello world")
#+end_src
That is it. No build config, no plugins, just Org mode.
Building with Emacs
The build step is a single Emacs batch command. No daemon, no interactive session.
emacs --batch "post.org" \
  --eval "(setq org-export-with-toc nil)" \
  --eval "(setq org-html-postamble nil)" \
  --eval "(setq org-export-with-section-numbers nil)" \
  --funcall org-html-export-to-html
This disables the table of contents, removes the default postamble, and turns off section numbers. The output is clean HTML that I style with one CSS file.
It also runs anywhere Emacs runs: laptop, CI, Raspberry Pi. No Node, Ruby, or Python dependencies.
The Run Script
I keep a single run script at the repo root that acts as a task runner. It is just a bash file with functions.
#!/usr/bin/env bash
set -o errexit
set -o pipefail
function _build {
echo "Building site..."
rm -rf docs && mkdir -p docs
# Copy static assets
rsync -a --exclude='*.org' --exclude='run' ./ docs/
# Convert org files to HTML
find . -name "*.org" -print | while read -r org_file; do
emacs --batch "${org_file}" \
--eval "(setq org-export-with-toc nil)" \
--funcall org-html-export-to-html
mv "${org_file%.org}.html" docs/
done
}
function build_docker {
_build
docker buildx build --platform linux/amd64 -t mysite:latest .
}
function run_site_local {
docker run --rm -p 8080:8080 mysite:latest
}
# ... more functions
"${@:-help}"
The last line means: run the function named by the first argument, passing along the rest, or default to help. So ./run build_docker calls build_docker, and a bare ./run prints the available commands.
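A minimal, self-contained sketch of the same dispatch pattern. The greet task and the help body are illustrative, not taken from my actual script:

```shell
#!/usr/bin/env bash
set -o errexit
set -o pipefail

# An illustrative task; the real script has _build, build_docker, etc.
function greet {
  echo "hello, ${1:-world}"
}

# Default target: list every function defined in this file
function help {
  echo "Usage: ./run <task> [args]"
  compgen -A function | sed 's/^/  /'
}

# Run the function named by the first argument, or fall back to help
"${@:-help}"
```

Because bash expands "${@:-help}" to the positional arguments (or the literal word help when there are none), the whole dispatcher is a single line with no case statement.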
Why I like this pattern:
- Works the same locally and in CI
- New contributors can read one file to see every command
- Commands compose cleanly (build_docker calls _build)
- No external task runner
┌─────────────────────────────────────────────────────────┐
│ ./run script                                            │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  ./run _build          →  Build HTML from org files     │
│  ./run build_docker    →  Build site + Docker image     │
│  ./run run_site_local  →  Run container locally         │
│  ./run tag_site        →  Tag for GCP registry          │
│  ./run deploy          →  Push + deploy to Cloud Run    │
│  ./run ship            →  Do everything                 │
│                                                         │
└─────────────────────────────────────────────────────────┘
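The file handling inside _build can be exercised without Emacs installed. This sketch simulates the export step with touch standing in for the emacs call; the temp files are placeholders:

```shell
# Simulate _build's loop: find org files, "export" each one
# (touch stands in for the emacs invocation), and collect the
# results in docs/.
workdir=$(mktemp -d)
mkdir -p "$workdir/docs"
touch "$workdir/a.org" "$workdir/b.org"

find "$workdir" -maxdepth 1 -name "*.org" -print | while read -r org_file; do
  touch "${org_file%.org}.html"        # stand-in for the emacs export
  mv "${org_file%.org}.html" "$workdir/docs/"
done

ls "$workdir/docs"
```

The key detail is the suffix substitution ${org_file%.org}.html, which maps each source file to the output name the export produces next to it.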
Docker
The Docker setup is minimal:
FROM nginx:alpine
COPY ./docs /usr/share/nginx/html
EXPOSE 8080
RUN sed -i 's/listen 80;/listen 8080;/' /etc/nginx/conf.d/default.conf
Four lines. Copy the built HTML into nginx, expose port 8080, and update the config to listen there.
The nginx:alpine image is tiny. The whole site adds a few more MB. Fast to build, fast to push, fast to start.
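The sed step is plain string substitution. Here is the same rewrite applied to a sample directive, so you can see exactly what changes; the line is illustrative, not the full nginx config:

```shell
# Apply the Dockerfile's port rewrite to a sample nginx directive
conf='    listen 80;'
rewritten=$(printf '%s\n' "$conf" | sed 's/listen 80;/listen 8080;/')
echo "$rewritten"
```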
The CI/CD Pipeline
GitHub Actions runs two workflows:
Build on PR
On pull requests, it builds the site and Docker image to verify nothing is broken.
on:
pull_request:
types: [opened, synchronize, reopened]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install Emacs
run: sudo apt-get install -y emacs
- name: Build Docker image
run: |
./run _build
docker buildx build --platform linux/amd64 -t mysite:latest .
Build and Deploy on Push to Master
On merge to master, it builds the site, pushes the Docker image to Artifact Registry, and deploys to Cloud Run.
on:
push:
branches: [master]
jobs:
build-and-push:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install Emacs
run: sudo apt-get install -y emacs
- name: Authenticate to Google Cloud
uses: google-github-actions/auth@v2
with:
credentials_json: ${{ secrets.GCP_SERVICE_ACCOUNT_KEY }}
- name: Build and push
run: |
./run _build
docker buildx build --platform linux/amd64 -t mysite:latest .
./run tag_site
docker push $FULL_IMAGE_PATH
- name: Deploy to Cloud Run
run: ./run deploy
The key point: the same run script works locally and in CI.
GCP Infrastructure
Two GCP services do the heavy lifting:
Artifact Registry
This is where Docker images live. It is GCP's version of Docker Hub, but private and integrated with Cloud Run.
# Tag the image for the registry
docker tag mysite:latest \
  europe-west1-docker.pkg.dev/my-project/my-repo/mysite:latest

# Push it
docker push europe-west1-docker.pkg.dev/my-project/my-repo/mysite:latest
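The $FULL_IMAGE_PATH used in the CI workflow and the deploy step is just these pieces joined together. A sketch, using the same placeholder project and repo names as above:

```shell
# Build the Artifact Registry image path from its parts
# (project and repo names are placeholders)
REGION="europe-west1"
PROJECT="my-project"
REPO="my-repo"
IMAGE="mysite"
TAG="latest"
FULL_IMAGE_PATH="${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${IMAGE}:${TAG}"
echo "$FULL_IMAGE_PATH"
```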
Cloud Run
Cloud Run runs the container and can scale to zero. If nobody visits, I pay nothing for compute.
gcloud run deploy mysite \
  --image $FULL_IMAGE_PATH \
  --region europe-west1 \
  --allow-unauthenticated \
  --min-instances 0 \
  --max-instances 3 \
  --memory 128Mi
New images are picked up automatically with no manual restarts.
How It All Fits Together
Here is the full flow:
THE BUILD PIPELINE

┌──────────────┐
│ Write post   │   Write in org mode (post.org)
│ in Emacs     │
└──────┬───────┘
       │
       ▼
┌──────────────┐
│ git push     │   Push to GitHub (master branch)
└──────┬───────┘
       │
       ▼
┌──────────────────────────────────────────────────────┐
│ GitHub Actions                                       │
│                                                      │
│ 1. Install Emacs                                     │
│ 2. Run ./run _build (org → HTML)                     │
│ 3. Build Docker image                                │
│ 4. Push to Artifact Registry                         │
│ 5. Deploy to Cloud Run                               │
│                                                      │
└──────────────────────┬───────────────────────────────┘
                       │
                       ▼
┌──────────────────────────────────────────────────────┐
│ Google Cloud Platform                                │
│                                                      │
│  ┌─────────────────┐      ┌─────────────────┐        │
│  │ Artifact        │ ───▶ │ Cloud Run       │        │
│  │ Registry        │      │                 │        │
│  │                 │      │ Auto-deploys    │        │
│  │ (Docker images) │      │ new images      │        │
│  └─────────────────┘      └────────┬────────┘        │
│                                    │                 │
└────────────────────────────────────┼─────────────────┘
                                     │
                                     ▼
                          ┌─────────────────┐
                          │ adamfallon.com  │
                          │                 │
                          │ (scales 0-3)    │
                          └─────────────────┘
Local Development
For local development, I run the Docker container:
./run build_and_run_local
It builds the site, creates a Docker image, and starts nginx on port 8080. Open http://localhost:8080 to preview exactly what will be deployed.
Sometimes I skip Docker and open the HTML files directly. Since it is all static HTML, that works for quick checks.
What I Like About This Setup
It uses boring tech
Emacs, nginx, and Docker are old and stable. That is a feature. I do not want to spend a weekend debugging a build tool update.
Everything is in the repo
The entire build process lives in the run script and GitHub Actions workflows. In five years, I can rebuild by reading those files.
It scales to zero
Cloud Run with min-instances 0 means no compute costs when nobody visits.
It is fast
No JS framework, no hydration, no client-side routing. Pages load quickly, even on slow connections.
What I Would Change
- Org-to-HTML export could look better out of the box. I wrote CSS to make it presentable.
- Emacs startup time adds up when exporting many files. For a huge site, that might be annoying.
Recap
The setup is:
- Write posts in Org mode
- Convert with Emacs batch mode
- Package in a tiny nginx Docker image
- Build and deploy via GitHub Actions
- Host on Cloud Run with automatic scaling
If you need comments, analytics, or dynamic content, you will need extra services. But for a simple blog, this is reliable and easy to maintain.
If you enjoyed this, you might also like my post on sending email notifications from Docker, which uses a similar approach with Docker and bash scripts.