Infrastructure / Bubbles fleet

Bubbles, the fleet that hosts your site.

130-plus production sites run on Bubbles infrastructure: Debian Linux, Nginx, FastAPI sidecars, BIND9 DNS, a Tailscale mesh, and Apple Silicon nodes for AI inference. The same engineer who builds your site operates the infrastructure under it.

01 The fleet

Three nodes, one mesh.

Bubbles is a three-node LAN cluster: Bubbles (the public web host) plus VALKYRIE and M2 (Apple Silicon nodes for AI inference). All three run the same MADDIE binary and share state over NATS. A fourth machine, Thunderport, is a standalone Apple Silicon node hosting MADDIE.

The fleet is intentionally modest: three production nodes plus an Apple Silicon research node. The aim is reliability, not scale; if a small-business client needs hyperscale we deploy on Cloud Run or Cloudflare Workers, but most production small-business sites do not.

  • Bubbles. Debian amd64, 16GB RAM, public web host, Nginx for many vhosts under /var/www/. Public IP routed. Source of truth for static content. Brain binary for federated AI.
  • VALKYRIE. macOS M1, 8GB RAM, brain port 9999, AI ML seed corpus (lilianweng, colah, distill). Runs as launchd service.
  • M2. macOS M2, 8GB RAM, brain port 9999, code seed corpus (go.dev, realpython, MDN). Hosts BlenderMind, FrankMind, megamind sub brains on ports 8895, 8896, 9999.
  • Thunderport. Standalone Apple Silicon node hosting MADDIE on port 8893. W_know weights file at 512 MB with 459K+ non-zero weights.

02 The stack

What runs on every node.

Web layer

public surface
Nginx, Let's Encrypt SSL, HTTP/2, gzip

App layer

backends
FastAPI, Go MADDIE, SQLite, PostgreSQL

DNS layer

name resolution
BIND9 authoritative, ns1.thatwebhostingguy.com, ns2.thatwebhostingguy.com

Mesh layer

private network
Tailscale, NATS messaging, SSH key auth

OS

platform
Debian 12, macOS Sonoma

Storage

data
SSD primary, SMR external 4.5 TB, off-site backup

AI inference

on premises
Apple Silicon, llama.cpp, MADDIE federated

Cloud LLMs

where remote
Anthropic Claude, OpenAI
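The split between on-premises and cloud inference can be sketched as a routing rule. The rule below is an assumption inferred from the compliance line in the stack TOML (BAA available, NIST 800-53 friendly): anything tagged private stays on the Apple Silicon nodes, everything else may go to a cloud LLM. The function name and tag vocabulary are hypothetical.

```python
def route_inference(workload_tags: set[str]) -> str:
    """Illustrative routing rule: private/regulated workloads stay on-prem
    (Apple Silicon, llama.cpp/MADDIE); the rest may use a cloud LLM."""
    if workload_tags & {"phi", "private", "baa"}:
        return "local:apple-silicon"
    return "cloud:anthropic-or-openai"
```

The real decision would also weigh model capability and latency; this only captures the compliance boundary.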

03 Default stack TOML

Reproducible, declared, public.

The default stack is documented at /default-stack.toml on every site. The same configuration that ran the original deploy is the configuration the engineer touches a year later when answering a maintenance email.

[infrastructure]
host = "Bubbles"
ip_public = "169.155.162.118"
location = "Cassville, MO"
os = "Debian 12 amd64"
ram_gb = 16
nginx = "yes"
ssl = "Let's Encrypt managed by Certbot"

[mesh]
tailscale = "yes"
nats = "nats://127.0.0.1:4222"
nats_max_payload_mb = 8

[ai]
megamind_node_count = 4
inference_local = "Apple Silicon"
inference_cloud = "Anthropic Claude, OpenAI"
private_compliance = "BAA available, NIST 800-53 friendly"

04 Why hand operated

The studio runs the infrastructure.

Most studios hand client sites to Vercel, Netlify, or AWS and forget about them. We run the infrastructure ourselves because the same engineer who knows the application stack also knows where the application lives in production. When a site has a problem, the same person who built the code also has SSH access to the server and the time to fix it. That is rare and we think it matters.

There are trade-offs: hyperscale traffic spikes are not this studio's strength. But most small-business sites do not need hyperscale; they need reliability, competence, and accountability, and we ship that.

Sources: Tailscale, NATS messaging, Nginx, Debian.