
How to Back Up SQLite in Production

Borela field notes

A practical SQLite backup guide for production apps: WAL mode, continuous replication, off-server storage, restore drills, alerts, and retention.

The uncomfortable part of SQLite backups

SQLite makes small production apps wonderfully simple. One database file, no separate server, fewer moving parts, and a deployment story that fits on a single VPS. The backup story can be just as simple, but only if you respect how SQLite actually writes data.

The mistake I see most often is treating app.db like a static file. A cron job copies it at midnight, rsync ships it somewhere else, and everyone feels covered. That works right up until writes happen during the copy, the app is in WAL mode, or the only backup you can find has never been restored on a clean machine. A file copied mid-write can be torn, and in WAL mode the most recent transactions live in the -wal sidecar file that a naive copy of app.db misses entirely.

A real production SQLite backup plan has two jobs. It needs to keep a recent copy of the database somewhere outside the app server, and it needs to prove that copy can become a working database again.

Start with the failure you are willing to survive

Before choosing tools, choose the recovery promise. If the VPS disappears at 10:17, how much data can the business lose? Five seconds? Five minutes? One day? If the answer is one day, a nightly copy might be acceptable. If users are paying you or creating data all day, it usually is not.

For most indie SaaS and internal tools, the practical target is continuous replication with a stale-backup alert. You do not need a giant database platform. You do need to know when the latest backup is no longer recent.
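A stale-backup alert can be as small as a cron job comparing the age of a freshness marker against your recovery target. Here is a minimal sketch, assuming your backup job touches a marker file after every successful run; the function name, paths, and threshold are illustrative, not part of any tool:

```shell
# Sketch of a stale-backup check, assuming the backup job touches a marker
# file after every successful run. Names and thresholds are illustrative.
check_backup_freshness() {
  marker="$1"    # file whose mtime records the last successful backup
  max_age="$2"   # maximum tolerated age, in seconds
  now=$(date +%s)
  # GNU stat; on BSD/macOS use: stat -f %m "$marker"
  last=$(stat -c %Y "$marker" 2>/dev/null || echo 0)
  age=$((now - last))
  if [ "$age" -gt "$max_age" ]; then
    echo "ALERT: last backup is ${age}s old (limit ${max_age}s)"
    return 1
  fi
  echo "ok: last backup ${age}s ago"
}
```

Run it from cron every few minutes and wire the non-zero exit to whatever pages you.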

Minimum viable production setup

Use WAL mode so readers and writers can coexist. Set a busy timeout so normal traffic has a chance to wait instead of failing immediately. Then stream changes continuously to object storage using a SQLite-aware tool such as Litestream or a managed service built around it.

Keep the replica outside the app server. A second directory on the same machine is useful for testing, but it is not disaster recovery. If the disk dies, the backup dies with it.

PRAGMA journal_mode=WAL;
PRAGMA busy_timeout=5000;
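With Litestream, the continuous-replication piece is a short config file. A minimal sketch, with an illustrative database path and bucket name:

```yaml
# /etc/litestream.yml — path and bucket name are illustrative
dbs:
  - path: /var/lib/app/app.db
    replicas:
      - url: s3://my-backup-bucket/app
```

Run litestream replicate -config /etc/litestream.yml under a process supervisor next to the app, with object-storage credentials supplied through the usual environment variables.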

Do not stop at replication

Replication tells you bytes are leaving the machine. It does not tell you that your restore command still works, that your object-storage credentials still have read access, or that the restored database passes integrity checks.

The boring but serious version is a scheduled restore drill. Restore the latest backup into a temporary path, open it read-only, run PRAGMA integrity_check, count the tables and rows that matter, record the result, and alert on failure. That is the difference between a backup system and backup hope.

borela-agent restore -project app -output /tmp/app-restored.db
sqlite3 /tmp/app-restored.db 'PRAGMA integrity_check;'
sqlite3 /tmp/app-restored.db '.tables'
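If you are scripting the drill yourself, the verification half can be a small function around the sqlite3 CLI. This is a sketch, not a fixed recipe: the function name is made up, the restored path comes from whatever restore tool you use, and the checks you run beyond integrity_check are up to you:

```shell
# Verify a freshly restored SQLite file: open it read-only, run the
# integrity check, and fail loudly if anything is off.
# The function name is illustrative, not a real tool.
verify_restored_db() {
  db="$1"
  result=$(sqlite3 -readonly "$db" 'PRAGMA integrity_check;')
  if [ "$result" != "ok" ]; then
    echo "ALERT: integrity_check failed for ${db}: ${result}"
    return 1
  fi
  echo "ok: ${db} passed integrity_check"
}
```

Schedule it weekly, right after the restore step, and record the output somewhere a human will actually notice.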

Retention should match human mistakes

Server loss is not the only reason you restore. Someone can delete the wrong tenant, run a bad migration, or ship code that corrupts data slowly. Those mistakes are often discovered hours or days later.

Seven days of point-in-time recovery is a reasonable default for small apps. It is long enough to recover from most weekend discoveries without turning storage into an unbounded liability. If your users notice mistakes weeks later, extend retention deliberately and test that longer restore window.
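With Litestream, a point-in-time restore is a normal restore plus a timestamp inside the retention window. The output path, bucket, and timestamp below are illustrative:

```shell
# Restore the database as it existed shortly before a bad change.
# The -timestamp value must fall inside your retention window.
litestream restore \
  -o /tmp/app-before-incident.db \
  -timestamp 2024-05-18T09:00:00Z \
  s3://my-backup-bucket/app
```

Practice this path at least once before you need it; discovering retention gaps during an incident is the expensive way to learn.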

What Borela automates

Borela runs next to the app, streams SQLite changes through Litestream, records backup freshness, restores managed backups in isolation, runs integrity checks, records table and row counts, and emails when the proof fails.

You can build all of that yourself, and many teams should. Borela exists for the solo operator who wants SQLite's simplicity without also owning the weekly restore checklist.

A backup you have never restored is still a guess.

Let Borela run the restore drill every week.