Bubbles, the fleet that hosts your site.
130+ production sites run on Bubbles infrastructure: Debian Linux, Nginx, FastAPI sidecars, BIND9 DNS, a Tailscale mesh, and Apple Silicon nodes for AI inference. The same engineer who builds your site operates the infrastructure under it.
Three nodes, one mesh.
The fleet is intentionally modest: three production nodes plus an Apple Silicon research node. The aim is reliability, not scale. If a small business client genuinely needs hyperscale we deploy to Cloud Run or Cloudflare Workers, but most small business sites do not.
- Bubbles. Debian amd64, 16 GB RAM. Public web host running Nginx for many vhosts under /var/www/, with a publicly routed IP. Source of truth for static content. Runs the brain binary for federated AI.
- VALKYRIE. macOS on an M1, 8 GB RAM. Brain on port 9999 with an AI/ML seed corpus (lilianweng, colah, distill). Runs as a launchd service.
- M2. macOS on an M2, 8 GB RAM. Brain on port 9999 with a code seed corpus (go.dev, realpython, MDN). Hosts the BlenderMind, FrankMind, and megamind sub-brains on ports 8895, 8896, and 9999.
- Thunderport. Standalone Apple Silicon node hosting MADDIE on port 8893. Its W_know weights file is 512 MB with 459K+ non-zero weights.
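For illustration, here is a minimal sketch of querying a brain node over the Tailscale mesh from any other node. Only the VALKYRIE host and port 9999 come from the list above; the MagicDNS name, the /ask route, and the JSON payload shape are assumptions, not the fleet's actual API.

import requests  # third-party HTTP client: pip install requests

# Hypothetical query to the VALKYRIE brain over the mesh.
# "valkyrie" assumes Tailscale MagicDNS resolution; port 9999 is from the node list.
resp = requests.post(
    "http://valkyrie:9999/ask",  # the /ask route is an assumption
    json={"prompt": "summarize the seed corpus on attention"},  # payload shape assumed
    timeout=10,
)
resp.raise_for_status()
print(resp.json())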
What runs on every node.
Web layer: Nginx, one vhost per site under /var/www/.
App layer: FastAPI sidecars behind Nginx.
DNS layer: BIND9.
Mesh layer: Tailscale, with NATS messaging between nodes.
OS: Debian 12 on the public host, macOS on the Apple Silicon nodes.
Storage: static content under /var/www/ on Bubbles, the source of truth.
AI inference: local, on the Apple Silicon brain nodes.
Cloud LLMs: Anthropic Claude and OpenAI.
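As a concrete example of the app layer: a sidecar is a small FastAPI service bound to localhost and proxied by an Nginx vhost. A minimal sketch; the port, routes, and contact-form use case are illustrative assumptions, not the studio's actual sidecar code.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ContactForm(BaseModel):
    name: str
    email: str
    message: str

@app.get("/healthz")
def healthz() -> dict:
    # Nginx or a monitor can poll this to confirm the sidecar is up.
    return {"ok": True}

@app.post("/contact")
def contact(form: ContactForm) -> dict:
    # In production this might hand the message off (for example over NATS);
    # here it just acknowledges receipt.
    return {"received": form.name}

# Bind to localhost so only the local Nginx vhost can reach it:
#   uvicorn sidecar:app --host 127.0.0.1 --port 8001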
Reproducible, declared, public.
The default stack is documented at /default-stack.toml on every site. The same configuration that ran the original deploy is the configuration the engineer touches a year later when answering a maintenance email.
[infrastructure]
host = "Bubbles"
ip_public = "169.155.162.118"
location = "Cassville, MO"
os = "Debian 12 amd64"
ram_gb = 16
nginx = "yes"
ssl = "Let's Encrypt managed by Certbot"

[mesh]
tailscale = "yes"
nats = "nats://127.0.0.1:4222"
nats_max_payload_mb = 8

[ai]
megamind_node_count = 4
inference_local = "Apple Silicon"
inference_cloud = "Anthropic Claude, OpenAI"
private_compliance = "BAA available, NIST 800-53 friendly"
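Because the manifest is plain TOML served over HTTP, anyone can fetch and verify the declared stack. A minimal sketch, assuming Python 3.11+ (for the standard-library tomllib parser) and a site that serves the file at /default-stack.toml; the hostname is a placeholder.

import tomllib
import urllib.request

def fetch_stack(site: str) -> dict:
    """Download and parse a site's declared stack manifest."""
    url = f"https://{site}/default-stack.toml"
    with urllib.request.urlopen(url) as resp:
        return tomllib.loads(resp.read().decode("utf-8"))

stack = fetch_stack("example.com")  # placeholder hostname
print(stack["infrastructure"]["host"])       # "Bubbles"
print(stack["mesh"]["nats_max_payload_mb"])  # 8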
The studio runs the infrastructure.
Most studios hand client sites to Vercel, Netlify, or AWS and forget about them. We run the infrastructure ourselves because the same engineer who knows the application stack also knows where the application lives in production. When a site has a problem, the same person who built the code also has SSH access to the server and the time to fix it. That is rare and we think it matters.
There are trade-offs. Absorbing hyperscale traffic spikes is not the studio's strength. Most small business sites do not need hyperscale; they need reliability, competence, and accountability, and we ship that.
Sources: Tailscale, NATS messaging, Nginx, Debian.