How I Build and Run This Website
January 15, 2026
The Problem
I wanted an Org Mode static site that was simple to maintain but also gave me full control over the publishing pipeline. Most static site generators felt like overkill for what I needed - a few blog posts and some basic pages. Jekyll, Hugo, Gatsby - they all came with their own templating languages, config files, and mental overhead.
Then I remembered I already have a tool that's been converting my notes into documents for years: Emacs.
Why Org Mode?
I write everything in Org mode anyway. Meeting notes, task lists, documentation - it all lives in .org files. The format is readable as plain text but also exports cleanly to HTML, PDF, or whatever else you need.
For a blog, this is perfect. I write a post in the same format I use for everything else, and Emacs converts it to HTML. No new syntax to learn. No YAML frontmatter. Just headings, text, and code blocks.
Here's what the source for a typical post looks like:
#+begin_src org
* Introduction

Some text here…

** A Subsection

More text with a code snippet inline.

#+begin_src python
def hello():
    print("hello world")
#+end_src
#+end_src
That's it. No build config, no plugins to install. Just org mode.
Building with Emacs
The build step is literally calling Emacs in batch mode. No daemon, no interactive session - just fire up Emacs, convert the file, and exit.
emacs --batch "post.org" \ --eval "(setq org-export-with-toc nil)" \ --eval "(setq org-html-postamble nil)" \ --eval "(setq org-export-with-section-numbers nil)" \ --funcall org-html-export-to-html
This disables the table of contents (I don't need it for short posts), removes the default Emacs postamble footer, and turns off section numbering. The output is clean HTML that I can style with a single CSS file.
The nice thing about this approach is that it runs anywhere Emacs runs. My laptop, a CI server, a Raspberry Pi - doesn't matter. No Node.js, no Ruby, no Python dependencies. Just Emacs.
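Because export behaviour can shift between Org versions, it's worth checking which Emacs and Org a new machine or CI image will actually use. A quick batch-mode check (just a sanity check, not part of the build):

#+begin_src bash
# Print the Emacs and Org versions that batch mode will use
emacs --version | head -n 1
emacs --batch --eval "(progn (require 'org) (princ (org-version)))"
#+end_src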
The Run Script
I use a pattern I picked up somewhere (probably a Nick Janetakis video) where you have a single run script at the root of your project that acts as a task runner. It's just a bash script with functions.
#!/usr/bin/env bash

set -o errexit
set -o pipefail

function _build {
  echo "Building site..."
  rm -rf docs && mkdir -p docs

  # Copy static assets (skip the org sources, the run script itself,
  # the output directory, and the git metadata)
  rsync -a --exclude='*.org' --exclude='run' \
    --exclude='docs' --exclude='.git' ./ docs/

  # Convert org files to HTML
  find . -name "*.org" -print | while read -r org_file; do
    emacs --batch "${org_file}" \
      --eval "(setq org-export-with-toc nil)" \
      --funcall org-html-export-to-html
    mv "${org_file%.org}.html" docs/
  done
}

function build_docker {
  _build
  docker buildx build --platform linux/amd64 -t mysite:latest .
}

function run_site_local {
  docker run --rm -p 8080:8080 mysite:latest
}

# ... more functions
"${@:-help}"
The magic is in the last line. ${@:-help} means "use whatever arguments were passed, or default to 'help'". So ./run build_docker calls the build_docker function, and ./run with no arguments shows available commands.
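My help function isn't shown here, but the usual shape of it in this pattern is a function that prints usage and lists every task defined in the file - something like this sketch:

#+begin_src bash
# Sketch of a default help task (not my exact function)
function help {
  printf "%s <task> [args]\n\nTasks:\n" "${0}"
  # compgen -A function lists every function defined in this script
  compgen -A function | cat -n
}

# Run the task named by the first argument, or "help" if none given
"${@:-help}"
#+end_src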
This pattern is great because:
- It works identically on my laptop and in CI
- New contributors can just read the script to understand what's available
- You can compose commands (build_docker calls _build internally)
- No external task runner to install
┌─────────────────────────────────────────────────────┐
│                    ./run script                     │
├─────────────────────────────────────────────────────┤
│                                                     │
│  ./run _build         → Build HTML from org files   │
│  ./run build_docker   → Build site + Docker image   │
│  ./run run_site_local → Run container locally       │
│  ./run tag_site       → Tag for GCP registry        │
│  ./run deploy         → Push + deploy to Cloud Run  │
│  ./run ship           → Do everything               │
│                                                     │
└─────────────────────────────────────────────────────┘
Docker
The Docker setup is about as minimal as it gets:
FROM nginx:alpine
COPY ./docs /usr/share/nginx/html
EXPOSE 8080
RUN sed -i 's/listen 80;/listen 8080;/' /etc/nginx/conf.d/default.conf
Four lines. Copy the built HTML into nginx's default directory, expose port 8080 (Cloud Run requirement), and tweak the config to listen on that port instead of 80.
The nginx:alpine image is tiny - about 40MB. The whole site, images included, adds maybe another 10MB. Fast to build, fast to push, fast to start.
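A quick smoke test after building the image is to run it detached, hit it once, and tear it down (the container name here is arbitrary):

#+begin_src bash
# Run the image in the background, check nginx answers on 8080, clean up
docker run --rm -d -p 8080:8080 --name mysite-smoke mysite:latest
curl -I http://localhost:8080
docker stop mysite-smoke
#+end_src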
The CI/CD Pipeline
GitHub Actions handles the automation. I have two workflows:
Build on PR
When I open a pull request, it builds the site and Docker image to verify nothing is broken. No deployment, just validation.
on:
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Emacs
        run: sudo apt-get install -y emacs
      - name: Build Docker image
        run: |
          ./run _build
          docker buildx build --platform linux/amd64 -t mysite:latest .
Build and Deploy on Push to Master
When I merge to master, the full pipeline runs: build the site, build the Docker image, push it to GCP Artifact Registry, and deploy to Cloud Run.
on:
  push:
    branches: [master]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Emacs
        run: sudo apt-get install -y emacs
      - name: Authenticate to Google Cloud
        uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SERVICE_ACCOUNT_KEY }}
      - name: Build and push
        run: |
          ./run _build
          docker buildx build --platform linux/amd64 -t mysite:latest .
          ./run tag_site
          docker push $FULL_IMAGE_PATH
      - name: Deploy to Cloud Run
        run: ./run deploy
The key insight is that the same run script commands work in CI as they do locally. The CI workflow is just calling functions I've already written and tested on my machine.
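The workflow also references ./run tag_site, ./run deploy, and a FULL_IMAGE_PATH variable that live in the "... more functions" part of the script excerpt. A rough sketch of what they can look like - the project, repo, and region values here are placeholders, and FULL_IMAGE_PATH is assumed to come from the environment:

#+begin_src bash
# Sketch only - placeholder names, not the exact functions from my script.
# FULL_IMAGE_PATH is assumed to be exported in the environment, e.g.:
#   export FULL_IMAGE_PATH="europe-west1-docker.pkg.dev/my-project/my-repo/mysite:latest"

function tag_site {
  # Re-tag the locally built image with its Artifact Registry path
  docker tag mysite:latest "${FULL_IMAGE_PATH}"
}

function deploy {
  # Deploy the pushed image to Cloud Run (the full flag list is shown
  # in the Cloud Run section below)
  gcloud run deploy mysite \
    --image "${FULL_IMAGE_PATH}" \
    --region europe-west1 \
    --allow-unauthenticated
}
#+end_src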
GCP Infrastructure
The hosting uses two GCP services:
Artifact Registry
This is where Docker images live. It's basically GCP's version of Docker Hub, but private and integrated with their other services.
# Tag the image for the registry
docker tag mysite:latest \
  europe-west1-docker.pkg.dev/my-project/my-repo/mysite:latest

# Push it
docker push europe-west1-docker.pkg.dev/my-project/my-repo/mysite:latest
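One detail that's easy to forget when pushing from a new machine: Docker has to be configured to use your Google credentials for the registry host first. With the gcloud CLI installed, that's a single command:

#+begin_src bash
# Let Docker authenticate to this Artifact Registry host via gcloud
gcloud auth configure-docker europe-west1-docker.pkg.dev
#+end_src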
Cloud Run
Cloud Run takes a Docker image and runs it as a web service. The clever bit is that it can scale to zero - if nobody is visiting your site, you're not paying for compute.
gcloud run deploy mysite \
  --image $FULL_IMAGE_PATH \
  --region europe-west1 \
  --allow-unauthenticated \
  --min-instances 0 \
  --max-instances 3 \
  --memory 128Mi
When the deploy step runs with a new image, Cloud Run spins up a new revision and shifts traffic to it automatically. No manual restarts, no blue-green deployment scripts - it just works.
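If you want to confirm what's live after a deploy, gcloud can report the latest ready revision (assuming the same service name and region as in the deploy command above):

#+begin_src bash
# Show the latest ready revision of the service
gcloud run services describe mysite \
  --region europe-west1 \
  --format 'value(status.latestReadyRevisionName)'
#+end_src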
For a personal blog that gets maybe a few hundred visitors a month, this costs essentially nothing. The free tier covers most of it, and even when I go over, we're talking pennies.
How It All Fits Together
Here's the full flow:
THE BUILD PIPELINE

┌──────────────┐
│  Write post  │   Write in org mode (post.org)
│   in Emacs   │
└──────┬───────┘
       │
       ▼
┌──────────────┐
│   git push   │   Push to GitHub (master branch)
└──────┬───────┘
       │
       ▼
┌──────────────────────────────────────────────────────┐
│                    GitHub Actions                     │
│                                                       │
│  1. Install Emacs                                     │
│  2. Run ./run _build (org → HTML)                     │
│  3. Build Docker image                                │
│  4. Push to Artifact Registry                         │
│  5. Deploy to Cloud Run                               │
│                                                       │
└──────────────────────┬────────────────────────────────┘
                       │
                       ▼
┌──────────────────────────────────────────────────────┐
│                 Google Cloud Platform                 │
│                                                       │
│  ┌─────────────────┐      ┌─────────────────┐         │
│  │    Artifact     │ ───▶ │    Cloud Run    │         │
│  │    Registry     │      │                 │         │
│  │                 │      │  Auto-deploys   │         │
│  │ (Docker images) │      │   new images    │         │
│  └─────────────────┘      └────────┬────────┘         │
│                                    │                  │
└────────────────────────────────────┼──────────────────┘
                                     │
                                     ▼
                            ┌─────────────────┐
                            │ adamfallon.com  │
                            │                 │
                            │  (scales 0-3)   │
                            └─────────────────┘
Local Development
For local development, I just run the Docker container:
./run build-and-run-local
This builds the site, creates a Docker image, and starts nginx on port 8080. Open http://localhost:8080 and I can preview exactly what will be deployed.
Sometimes I skip the Docker step entirely and just open the HTML file directly in a browser. Since there's no server-side rendering, no JavaScript framework, no API calls - it's just static HTML - this works fine for most things.
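In practice that's just exporting the one file and pointing a browser at the result (use open instead of xdg-open on macOS):

#+begin_src bash
# Export a single post and open the generated HTML directly
emacs --batch post.org \
  --eval "(setq org-export-with-toc nil)" \
  --funcall org-html-export-to-html
xdg-open post.html
#+end_src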
What I Like About This Setup
It's boring technology
Emacs has been around since the 1970s. Nginx since 2004. Docker since 2013. None of this is cutting edge, and that's the point. I don't want to spend my weekends debugging build tool updates.
Everything is in the repo
The entire build process is captured in the run script and the GitHub Actions workflows. If I need to rebuild this in five years, I can read those files and know exactly what to do.
It scales to zero
Cloud Run with min-instances 0 means I'm not paying for compute when nobody is visiting. For a personal site, this matters. I've had months where the hosting cost was literally zero.
It's fast
No JavaScript framework, no hydration, no client-side routing. Just HTML and CSS. Pages load in under a second even on slow connections.
What I'd Change
The org-to-HTML export could be prettier out of the box. I ended up writing a fair bit of CSS to make it look decent. There are org-mode themes and export packages that might help, but I haven't explored them much.
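I won't go into the CSS itself, but for reference, org-html-head is one way to inject a stylesheet link at export time. The filename here is a placeholder, and this isn't necessarily how my build wires it in:

#+begin_src bash
# Add a stylesheet link to the exported <head> (placeholder path)
emacs --batch post.org \
  --eval '(setq org-html-head
                "<link rel=\"stylesheet\" href=\"/style.css\" />")' \
  --funcall org-html-export-to-html
#+end_src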
I also wish the build was faster. Emacs startup time isn't negligible, and running it in batch mode for each file adds up. For a site with hundreds of posts this might become annoying, but for now it's fine.
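One obvious option would be to export everything in a single Emacs process instead of starting one per file. This is a rough, untested sketch rather than what the run script does today, and it would still need the step that moves the generated HTML into docs/:

#+begin_src bash
# Export every org file under the current directory in one Emacs process
emacs --batch \
  --eval "(require 'ox-html)" \
  --eval "(setq org-export-with-toc nil)" \
  --eval '(dolist (f (directory-files-recursively "." "\\.org$"))
            (with-current-buffer (find-file-noselect f)
              (org-html-export-to-html)))'
#+end_src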
Recap
The whole setup is:
- Write posts in org mode
- Use Emacs batch mode to convert to HTML
- Package it in a tiny nginx Docker container
- Use GitHub Actions to build and push on merge
- Let Cloud Run handle hosting with automatic scaling
Is this the right approach for everyone? Probably not. If you want comments, a newsletter, analytics, or any kind of dynamic content, you'd need to bolt on additional services.
But for a simple blog where I just want to write things and have them appear on the internet - this works perfectly. And I understand every piece of it, which means when something breaks, I can fix it.
Sometimes boring is exactly what you need.
If you enjoyed this, you might also like my post on setting up Docker email notifications, which uses a similar approach with Docker and bash scripts.