Firebase is a great starting point. It lets small teams ship fast with minimal backend code, and we do not blame anyone for choosing it. But as products grow, the constraints become real. Firestore's query limitations force data denormalization that makes every new feature harder to build. Pricing becomes unpredictable because reads and writes cost money and usage spikes hit the bill before you can react. Vendor lock-in deepens as you spread business logic across Cloud Functions, Firestore rules, and Firebase extensions. At some point, the cost of staying on Firebase exceeds the cost of migrating off it.
We have helped multiple teams make this move. Supabase is the most natural migration target because it provides direct equivalents for every Firebase service while running on PostgreSQL, giving you the relational database capabilities that Firestore fundamentally lacks. We wrote a detailed Supabase vs Firebase comparison that covers the feature-by-feature differences. This post focuses on the practical migration process.
Planning the Migration
Do not attempt a big-bang migration. Plan for a phased approach where Firebase and Supabase run in parallel during the transition. The typical order is:
1. Database: Migrate Firestore data to PostgreSQL
2. Authentication: Migrate Firebase Auth users to Supabase Auth
3. Storage: Migrate Firebase Storage files to Supabase Storage
4. Functions: Migrate Cloud Functions to Supabase Edge Functions
5. Real time: Replace Firestore listeners with Supabase Realtime
6. Cutover: Switch DNS, retire Firebase project
Each phase should be independently deployable. Your application should be able to read from both systems during the transition. Allocate 4 to 8 weeks for a typical migration depending on data volume and feature complexity. We have done this in as little as 2 weeks for smaller apps and as long as 3 months for products with complex Firestore data models.
Firestore to PostgreSQL
This is the hardest part. Firestore is a document database with nested subcollections. PostgreSQL is a relational database with tables, rows, and foreign keys. You cannot map one to the other mechanically. You need to redesign your data model.
Start by exporting your Firestore data. Use the Firebase Admin SDK to walk through each collection and subcollection, writing documents to JSON files. Do not use the Firebase export tool to GCS unless you want to deal with the protobuf format it produces. A simple Node.js script that iterates collections and writes JSON is cleaner and gives you control over batching.
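The trickiest part of a hand-rolled exporter is making each document JSON-serializable, because Firestore values include Timestamps and nested maps. A minimal sketch of that flattening step, assuming the Admin SDK walker (not shown) hands each document's data to this helper before writing it to disk. `toJsonSafe` is a hypothetical name; in the real SDK, Firestore Timestamps expose `toDate()`:

```typescript
// Flatten one Firestore document's data into plain JSON values.
// Assumption: anything with a toDate() method is a Firestore Timestamp.
function toJsonSafe(value: unknown): unknown {
  if (value === null || value === undefined) return null;
  if (value instanceof Date) return value.toISOString();
  // Firestore Timestamps (duck-typed via toDate) become ISO strings.
  if (
    typeof value === "object" &&
    typeof (value as { toDate?: unknown }).toDate === "function"
  ) {
    return (value as { toDate: () => Date }).toDate().toISOString();
  }
  if (Array.isArray(value)) return value.map(toJsonSafe);
  if (typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      out[k] = toJsonSafe(v);
    }
    return out;
  }
  return value; // strings, numbers, booleans pass through unchanged
}
```

The walker itself is then a straightforward recursion over `db.listCollections()` and each document's subcollections, calling this helper on every `doc.data()`.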
Then design your PostgreSQL schema. The general rules are:
- Top level collections become tables. A `users` collection becomes a `users` table.
- Subcollections become related tables with foreign keys. `users/{id}/orders` becomes an `orders` table with a `user_id` column.
- Denormalized data gets normalized. If you stored the user name on every order document (because Firestore cannot join), remove it from orders and rely on a JOIN.
- Nested objects can stay as JSONB columns if they vary in structure, or get promoted to proper columns if they are consistent.
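The rules above can be sketched as a small schema. Column names beyond `users` and `orders` are illustrative; note that Firestore document IDs are strings, so keep them as `text` (or mint UUIDs and retain the old ID in a legacy column):

```sql
create table users (
  id text primary key,      -- the old Firestore document ID
  email text not null,
  display_name text,
  preferences jsonb         -- a variably shaped nested object stays as JSONB
);

-- The users/{id}/orders subcollection becomes a table with a foreign key.
create table orders (
  id text primary key,
  user_id text not null references users (id),
  total_cents integer not null,
  created_at timestamptz not null default now()
  -- The denormalized user name that lived on every order document is gone;
  -- recover it with:
  --   select o.*, u.display_name
  --   from orders o join users u on u.id = o.user_id;
);
```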
Write your schema carefully. This is your chance to fix every data modeling shortcut Firebase forced you into. We cover relational schema best practices in our database schema design guide, and every principle there applies directly.
Once the schema is ready, write a migration script that reads the exported JSON and inserts into PostgreSQL. Use batch inserts (1,000 rows at a time) and handle Firestore's loose typing (a field that is sometimes a string and sometimes a number needs type coercion). Run this against a staging Supabase instance first. Validate row counts, check for null values where you expected data, and run your application's most critical queries against the new schema.
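Two helpers carry most of that script: batching and type coercion. A minimal sketch, assuming the exported JSON is already loaded into memory (names are illustrative):

```typescript
// Split rows into fixed-size batches for multi-row INSERTs.
function chunk<T>(rows: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < rows.length; i += size) {
    batches.push(rows.slice(i, i + size));
  }
  return batches;
}

// Coerce a loosely typed Firestore field into a number,
// returning null when it cannot be interpreted as one.
function coerceNumber(value: unknown): number | null {
  if (typeof value === "number" && Number.isFinite(value)) return value;
  if (typeof value === "string") {
    const trimmed = value.trim();
    const n = Number(trimmed);
    return trimmed !== "" && Number.isFinite(n) ? n : null;
  }
  return null;
}
```

Each batch from `chunk(rows, 1000)` then becomes one parameterized multi-row INSERT against the staging database.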
Firebase Auth to Supabase Auth
Supabase Auth supports importing Firebase Auth users with their password hashes. Firebase uses a scrypt-based hash, and Supabase Auth can verify these hashes natively if you provide the hash parameters from your Firebase project.
Export your users using the Firebase CLI: `firebase auth:export users.json --format=json`. This gives you email, display name, password hash, salt, and provider data. Then use the Supabase Admin API to import users in batches. The key fields to map are:
- email maps directly
- passwordHash and salt import into Supabase Auth's password system
- providerData (Google, Apple, etc.) maps to Supabase Auth's identity linking
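The per-user mapping above can be sketched as a pure function. The input shape matches what `firebase auth:export` emits; the output field names (`password_hash`, `password_salt`, `email_confirm`) are assumptions about the Supabase admin user-creation payload, so verify them against the current Admin API docs before importing:

```typescript
// Shape of one record from `firebase auth:export users.json --format=json`.
interface FirebaseExportedUser {
  localId: string;
  email?: string;
  emailVerified?: boolean;
  displayName?: string;
  passwordHash?: string; // base64-encoded scrypt hash
  salt?: string;         // base64-encoded salt
}

// Map a Firebase export record to an assumed Supabase admin payload.
function toSupabaseUser(u: FirebaseExportedUser) {
  return {
    email: u.email,
    email_confirm: u.emailVerified ?? false,
    password_hash: u.passwordHash, // assumed field name
    password_salt: u.salt,         // assumed field name
    user_metadata: {
      display_name: u.displayName,
      firebase_uid: u.localId, // keep the old UID for cross-referencing
    },
  };
}
```

Keeping the original Firebase UID in metadata makes it possible to join migrated auth users back to migrated Firestore rows during validation.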
After import, users can log in with their existing passwords. No password reset required. Social login providers (Google, Apple, GitHub) need to be configured in Supabase with the same OAuth credentials so existing tokens resolve correctly.
The trickiest part is custom claims stored in Firebase Auth tokens. If you use custom claims for role-based access, migrate these to Supabase's `raw_app_meta_data` or `raw_user_meta_data` columns on the auth.users table, then reference them in your Row Level Security policies. This is actually an upgrade: Firebase custom claims are limited to 1,000 bytes and require a token refresh to take effect. Supabase metadata is checked at query time with no size limit.
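A sketch of both halves, with illustrative table, claim, and email values. Moving a `role` claim into app metadata, then referencing it from a policy via `auth.jwt()`:

```sql
-- Copy a Firebase custom claim into app metadata for one user.
update auth.users
set raw_app_meta_data = raw_app_meta_data || '{"role": "admin"}'::jsonb
where email = 'admin@example.com';

-- Reference the claim in an RLS policy (app_metadata is embedded in the JWT).
create policy "admins can update any order"
on orders for update
using ((auth.jwt() -> 'app_metadata' ->> 'role') = 'admin');
```

Prefer `raw_app_meta_data` for authorization data: users can modify their own user metadata, but app metadata is only writable server-side.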
Cloud Functions to Edge Functions
Firebase Cloud Functions run on Node.js in Google Cloud. Supabase Edge Functions run on Deno. This means you cannot copy-paste your function code. You need to rewrite it, but the rewrites are usually simpler than the originals because Supabase's architecture eliminates much of the glue code.
Common patterns and their Supabase equivalents:
- HTTP triggered functions become Supabase Edge Functions with the same REST endpoints
- Firestore triggers (onCreate, onUpdate, onDelete) become PostgreSQL triggers or Supabase Realtime combined with database webhooks
- Auth triggers (onCreate, onDelete) become Supabase Auth hooks
- Scheduled functions become pg_cron jobs or Edge Functions triggered by cron via an external scheduler
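Two of those mappings as sketches, with illustrative table and job names. A Firestore onCreate trigger becomes a plain PostgreSQL trigger, and a scheduled function becomes a pg_cron job:

```sql
-- Equivalent of a Firestore onCreate trigger on orders.
create or replace function notify_new_order() returns trigger as $$
begin
  insert into notifications (user_id, body)
  values (new.user_id, 'Your order was received');
  return new;
end;
$$ language plpgsql;

create trigger on_order_created
after insert on orders
for each row execute function notify_new_order();

-- Equivalent of a scheduled Cloud Function, via the pg_cron extension.
select cron.schedule(
  'purge-expired-sessions',                        -- job name
  '0 3 * * *',                                     -- daily at 03:00
  $$delete from sessions where expires_at < now()$$
);
```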
The biggest win is that business logic that lived in Cloud Functions because Firestore rules could not express it often moves directly into Row Level Security policies or PostgreSQL functions. A Cloud Function that checked three conditions before allowing a write becomes a two-line RLS policy. This reduces latency, eliminates function cold starts, and makes the security model auditable.
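For example, a hypothetical Cloud Function that verified "the caller owns the order and the order is still open" before permitting an update collapses into a single policy (table and column names illustrative):

```sql
create policy "owners can update open orders"
on orders for update
using (user_id = auth.uid() and status = 'open');
```

The database now enforces this on every write path, not just the one that happened to go through the function.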
Storage Migration
Firebase Storage is backed by Google Cloud Storage, and Supabase Storage exposes an S3-compatible API, so migration is straightforward:
1. Download all files from Firebase Storage using the Admin SDK (iterate the bucket, download each file)
2. Upload to Supabase Storage using the Supabase client library or direct S3 compatible API
3. Update all file URL references in your database
If you stored file paths in Firestore documents (e.g., a user's profile_image_url), update those references as part of the database migration. Use a consistent path mapping: `gs://firebase-bucket/users/123/avatar.jpg` becomes `users/123/avatar.jpg` in a Supabase storage bucket.
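That path mapping is a one-line transform worth centralizing so the database migration and the file upload use identical logic. A minimal sketch (helper name is illustrative):

```typescript
// Strip the gs:// scheme and bucket name from a Firebase Storage URL,
// leaving the object path to reuse inside a Supabase storage bucket.
function toSupabasePath(gsUrl: string): string {
  const match = gsUrl.match(/^gs:\/\/[^/]+\/(.+)$/);
  if (!match) throw new Error(`not a gs:// URL: ${gsUrl}`);
  return match[1];
}
```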
Supabase Storage policies work like RLS. Define who can read and write to each path pattern using SQL policies on the storage.objects table. This replaces Firebase Storage rules with the same PostgreSQL based security model used everywhere else.
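As a sketch, a policy restricting users to their own folder, assuming files are stored under a `{user id}/...` prefix in a bucket named `avatars` (both assumptions, adjust to your path convention):

```sql
create policy "users manage own files"
on storage.objects for all
using (
  bucket_id = 'avatars'
  and (storage.foldername(name))[1] = auth.uid()::text
);
```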
Real Time Migration
Firestore's real time listeners are often the stickiest part of a migration because they are deeply embedded in client code. Supabase Realtime provides equivalent functionality with three channels: database changes (listen for INSERT, UPDATE, DELETE on specific tables), broadcast (publish/subscribe between clients), and presence (track who is online).
The migration pattern is:
- Replace `onSnapshot` listeners with Supabase `channel.on('postgres_changes', ...)` subscriptions
- Replace Firestore document writes that trigger UI updates with standard INSERT/UPDATE queries (Supabase Realtime picks up the changes automatically)
- If you used Firestore for presence (writing a "status" document when users come online), switch to Supabase Presence which is purpose built for this
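One way to keep the client migration incremental is a small adapter that converts a Supabase `postgres_changes` payload into the change shape an old `onSnapshot` handler expected, so existing UI code keeps working while you swap subscriptions underneath it. The payload fields below reflect supabase-js conventions; verify them against the client version you use:

```typescript
// Firestore-style change types as consumed by existing handlers.
type ChangeType = "added" | "modified" | "removed";

// Relevant fields of a supabase-js postgres_changes payload.
interface PostgresChange {
  eventType: "INSERT" | "UPDATE" | "DELETE";
  new: Record<string, unknown>;
  old: Record<string, unknown>;
}

// Map a Postgres change event onto the { type, doc } shape the old
// onSnapshot callback expected. DELETE payloads carry data in `old`.
function toSnapshotChange(p: PostgresChange): {
  type: ChangeType;
  doc: Record<string, unknown>;
} {
  const type: ChangeType =
    p.eventType === "INSERT" ? "added"
    : p.eventType === "UPDATE" ? "modified"
    : "removed";
  return { type, doc: p.eventType === "DELETE" ? p.old : p.new };
}
```

Wiring it up then looks roughly like `supabase.channel('orders').on('postgres_changes', { event: '*', schema: 'public', table: 'orders' }, p => handleChange(toSnapshotChange(p))).subscribe()`.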
Supabase Realtime uses WebSockets instead of Firestore's long-polling approach, which typically results in lower latency for change propagation. We covered the architecture behind this in our real time architecture guide.
Avoiding Downtime
The critical question is how to switch without downtime. The pattern we use is a dual-write period during the transition. After migrating historical data, your application writes to both Firebase and Supabase for a defined period (usually 1 to 2 weeks). Reads gradually shift to Supabase. Once you confirm data consistency between both systems, stop writing to Firebase and complete the cutover.
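The dual-write wrapper can be backend-agnostic if the two writers are injected. A sketch with illustrative names, assuming Firebase remains the source of truth during the transition, so a Supabase failure is logged rather than surfaced to the user:

```typescript
// Write to both backends. The primary (Firebase) write still fails the
// request on error; the secondary (Supabase) write is best-effort, and
// failures are recorded for the consistency check before cutover.
async function dualWrite<T>(
  writeFirebase: (value: T) => Promise<void>,
  writeSupabase: (value: T) => Promise<void>,
  value: T,
  onMismatch: (err: unknown) => void,
): Promise<void> {
  await writeFirebase(value);
  try {
    await writeSupabase(value);
  } catch (err) {
    onMismatch(err); // e.g. push to a reconciliation queue
  }
}
```

Flip the roles of the two writers when Supabase becomes the source of truth, then delete the wrapper at cutover.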
Use feature flags to control which system each user reads from. Start with internal users, expand to 10% of traffic, then 50%, then 100%. This is the same gradual rollout strategy we use for all major infrastructure changes, as we discussed in our guide on zero downtime deployments.
After Migration
Once the migration is complete, take time to optimize. PostgreSQL gives you capabilities Firestore never had: JOINs, aggregations, window functions, full text search, and complex queries. Features that required workarounds or denormalization in Firestore become straightforward SQL queries. This is the payoff: your team moves faster on every feature going forward.
If you are considering migrating from Firebase to Supabase and want a team that has done it before, reach out to us. We will assess your current Firebase usage, plan the migration phases, and execute it without disrupting your users.