Engineering
How FreeUptimeMonitoring is built, and why it's fast.
Overview
I built FreeUptimeMonitoring on Rails 8 with SQLite as the sole database engine. No Postgres, no MySQL, no Redis. Every piece of persistent state—user accounts, monitor configurations, check results, job queues—lives in SQLite files on disk.
My interest in scalability and performance comes from over twenty years of software engineering. Most recently, I managed performance, databases, and infrastructure at Gumroad—a multi-terabyte Postgres database, billions of rows, thousands of queries per second. That experience taught me that most scaling pain comes from architectural choices, not from raw scale itself. The right data model and the right storage engine eliminate entire categories of problems before they ever appear.
SQLite runs in-process—no network round-trips to a database server. Reads are measured in microseconds. For a workload like uptime monitoring—lots of small reads and writes, scoped to individual accounts—it's an ideal fit. The architecture is database-per-tenant: each account gets its own SQLite file. Combined with geographic node placement, both reads and writes happen on local disk, on a server close to the user.
Database-per-tenant
When you create an account on FreeUptimeMonitoring, I create a new SQLite database file on disk just for you. Your monitors, checks, notifications—everything lives in that one file, completely isolated from every other account.
This gives you:
- Total data isolation. There's no row-level security to misconfigure. Your data physically cannot leak into another account's queries because it's in a different file.
- No noisy neighbors. A heavy query on one account doesn't lock or slow down another. Each database has its own WAL (write-ahead log), its own locks, its own I/O.
- Trivial portability. An account's entire dataset is a single file. Backing up, restoring, or migrating a tenant is just copying a file.
- Independent scaling. I can move any tenant's database to a different server by copying one file and updating a DNS record.
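To make the per-tenant file concrete, here is a rough sketch of what connection switching can look like in Rails. The `Tenancy` module and the `storage/tenants/` layout are illustrative assumptions, not the production code; `ApplicationRecord` is the tenant base class described in the "Shared vs. tenant databases" section below.

```ruby
# Illustrative sketch only: point the tenant base class at the current
# account's SQLite file. With SQLite there is no server, user, or password;
# "connecting" to a tenant just means opening a different file.
module Tenancy
  def self.database_path_for(account_id)
    Rails.root.join("storage", "tenants", "#{account_id}.sqlite3").to_s
  end

  def self.switch_to(account_id)
    ApplicationRecord.establish_connection(
      adapter:  "sqlite3",
      database: database_path_for(account_id)
    )
  end
end

# Around each request, e.g. in a controller callback or middleware:
#   Tenancy.switch_to(current_account.id)
```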
Geographic node placement
When you sign up, I assign your account to the node server closest to you based on your timezone. Your tenant database lives on that node. This means both reads and writes are local—hitting disk on the same machine that's serving your request.
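As a rough illustration, the signup-time assignment could look like the snippet below. The hostnames, the `node_host` column, and the coarse timezone-to-region buckets are all hypothetical; they stand in for whatever mapping the real signup flow uses.

```ruby
# Hypothetical sketch of timezone-based node assignment at signup.
# Hostnames, columns, and the region buckets are illustrative only.
NODE_BY_REGION = {
  "eu" => "node-fra.example.com",  # Frankfurt
  "us" => "node-pdx.example.com"   # Oregon
}.freeze

def region_for(timezone)
  timezone.start_with?("Europe/", "Africa/") ? "eu" : "us"
end

def assign_node!(account)
  account.update!(node_host: NODE_BY_REGION[region_for(account.timezone)])
end

# A signup from timezone "Europe/Berlin" lands on the Frankfurt node,
# and that account's tenant database is created there.
```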
This is a meaningful difference from how most multi-tenant SaaS platforms work. The typical setup is a single primary database (often in us-east-1) with read replicas near users. Reads are fast, but every write—creating a monitor, updating a setting, recording a check result—has to travel back to the primary, adding tens of milliseconds of latency each way.
With database-per-tenant on SQLite, there's no distinction between reads and writes. Both are local disk operations on the nearest server. A write in Frankfurt stays in Frankfurt. A write in Oregon stays in Oregon.
Shared vs. tenant databases
Not everything is per-tenant. The system uses two types of databases:
- Shared database — stores accounts, users, memberships, invitations, and the node server registry. This is a single SQLite database replicated to all nodes via LiteFS. Writes go to the hub server; each node holds a read replica that is updated near-instantly.
- Tenant databases — store monitors, checks, and notifications. Each tenant gets its own SQLite file that lives on its assigned node. Not replicated—entirely local.
This separation is enforced at the model layer. Shared models inherit from SharedRecord, tenant models from ApplicationRecord. Controllers are similarly split: controllers handling shared resources extend UntenantedController and never touch tenant data. This strict boundary means the code works correctly regardless of deployment topology—a node server that only has tenant databases will never accidentally try to write to the shared database.
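Sketched in code, the split looks roughly like this. The class names come from the description above; the bodies are my assumption of the shape, using Rails' multi-database `connects_to` API for the shared connection.

```ruby
# Shared data (accounts, users, nodes) lives in the single LiteFS-replicated
# database. LiteFS replicates it at the filesystem level: writes happen on
# the hub, every node reads its own local copy.
class SharedRecord < ActiveRecord::Base
  self.abstract_class = true
  # Assumes a :shared entry in config/database.yml.
  connects_to database: { writing: :shared, reading: :shared }
end

# Tenant data uses whatever connection the request established for the
# current account's SQLite file (see the Tenancy sketch above).
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true
end

class Account < SharedRecord; end        # shared
class Monitor < ApplicationRecord; end   # tenant

# Controllers for shared resources never touch tenant databases.
class UntenantedController < ApplicationController; end
class AccountsController < UntenantedController; end
```

Because tenant models never name a fixed database, the same code runs unchanged on any node; the only thing that varies is which file the request opened.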
Schema-based provisioning
When a new tenant database is created, it's initialized from a schema snapshot (tenants_schema.rb), not by replaying the full migration history. This means provisioning is instant—a single schema load, regardless of how many migrations have accumulated over the lifetime of the application.
The schema file is a Ruby DSL that describes the current state of every tenant table: columns, indexes, constraints. It's regenerated automatically every time a migration runs. New tenants always start with a fully up-to-date database.
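A provisioning helper along these lines would do it. The method name and paths are assumptions (including that the snapshot lives at db/tenants_schema.rb), shown only to make the "single schema load" concrete.

```ruby
# Hypothetical provisioning sketch: initialize a brand-new tenant database by
# loading the schema snapshot rather than replaying the migration history.
def provision_tenant!(account_id)
  db_config = ActiveRecord::DatabaseConfigurations::HashConfig.new(
    Rails.env,
    "tenant_#{account_id}",
    { adapter: "sqlite3",
      database: Rails.root.join("storage", "tenants", "#{account_id}.sqlite3").to_s }
  )

  # load_schema connects to the database (the sqlite3 adapter creates the file
  # on first connect) and executes the snapshot, building every tenant table
  # in one pass.
  ActiveRecord::Tasks::DatabaseTasks.load_schema(
    db_config, :ruby, Rails.root.join("db", "tenants_schema.rb").to_s
  )
end
```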
Lazy async migrations
This is the key innovation that makes database-per-tenant practical at scale.
When I deploy a new version that includes a database migration, I don't migrate all tenant databases upfront. If there are a million tenants, migrating them all before the deploy completes would take too long and risk downtime.
Instead, migrations happen lazily (sketched in code after the list):
- Middleware detects pending migrations. On each request, lightweight middleware checks whether the current tenant's database needs migrating. An LRU cache (holding up to 10,000 tenants) ensures this check is near-zero-cost for recently verified tenants.
- A background job runs the migration. If pending migrations are found, a job is enqueued to run them. Concurrency is limited to one migration per tenant to prevent conflicts.
- A quick-check loop usually avoids the maintenance page. After enqueuing the migration job, the middleware waits up to one second (polling every 50ms) for the migration to complete. Most simple migrations finish within this window, and the user never notices.
- A maintenance page is shown if needed. If the migration takes longer than one second, the user sees a brief "Quick maintenance ongoing" page that auto-refreshes when the database is ready. The wait is typically only a few seconds.
- Background jobs are intercepted too. The same detection runs around every background job. If a job's tenant has pending migrations, the migration job is enqueued first and the original job is rescheduled with a short delay.
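The request-path half of this might look like the Rack middleware below. It is a hedged illustration of the flow described in the list, assuming hypothetical `Tenancy` helpers and a `TenantMigrationJob`, not the production implementation.

```ruby
# Hypothetical Rack middleware illustrating the lazy-migration flow.
# Tenancy.pending_migrations?, TenantMigrationJob, and the env key are
# assumed names, not the actual FreeUptimeMonitoring code.
class LazyTenantMigrations
  WAIT_BUDGET = 1.0   # seconds to wait before showing the maintenance page
  POLL_EVERY  = 0.05  # 50ms between quick checks

  def initialize(app)
    @app = app
    @verified = {}  # in production, an LRU cache capped at ~10,000 tenants
  end

  def call(env)
    tenant_id = env["freeuptime.tenant_id"]  # assumed to be set by earlier middleware
    return @app.call(env) if tenant_id.nil? || @verified[tenant_id]

    if Tenancy.pending_migrations?(tenant_id)
      TenantMigrationJob.perform_later(tenant_id)  # concurrency-limited to one per tenant
      return maintenance_response unless wait_briefly_for(tenant_id)
    end

    @verified[tenant_id] = true
    @app.call(env)
  end

  private

  # Poll for up to one second so quick migrations finish invisibly.
  def wait_briefly_for(tenant_id)
    deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + WAIT_BUDGET
    while Process.clock_gettime(Process::CLOCK_MONOTONIC) < deadline
      return true unless Tenancy.pending_migrations?(tenant_id)
      sleep POLL_EVERY
    end
    false
  end

  def maintenance_response
    [503, { "Content-Type" => "text/html", "Retry-After" => "2" },
     ["Quick maintenance ongoing - this page will refresh when your database is ready."]]
  end
end
```

The background-job interception described in the last bullet is the same check wrapped around job execution instead of a request: enqueue the migration, then reschedule the original job with a short delay.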
The result: deploying a migration that adds a column doesn't require touching any tenant database at deploy time. Migrations happen organically as tenants make requests. Inactive tenants—accounts that haven't been accessed in weeks or months—are migrated the next time their owner visits. No wasted work, no deployment bottleneck.
Why this matters
The combination of these techniques—SQLite, database-per-tenant, geographic placement, schema provisioning, lazy migrations—gives FreeUptimeMonitoring properties that are unusual for a multi-tenant SaaS:
- Deploy in seconds, not hours. No matter how many tenants exist, deploys only need to update the application code and shared database. Tenant migrations happen in the background.
- Consistent low latency. Every database operation is a local disk I/O on the nearest server. No network hops, no connection pools, no replication lag on writes.
- True isolation. A misbehaving query on one account cannot affect another. Databases can be individually backed up, restored, or moved.
- Operational simplicity. No database servers to manage, no connection strings to configure, no clustering to set up. The database is a file. The backup is a copy. The migration is a background job.