Before beginning your migration, we recommend running the PlanetScale Discovery Tool on your database to assess compatibility and ease of migration.
Write-Ahead Logging (WAL) streaming provides a near-zero downtime migration method by continuously replicating changes from your source PostgreSQL database to PlanetScale Postgres using logical replication.
Want expert guidance for your migration? PlanetScale’s migration services are available to help you plan and execute a smooth, successful move.
PostgreSQL 10+ on the source database (logical replication support)
Administrative access to source database configuration
Network connectivity between source and PlanetScale Postgres
Connection details for your PlanetScale Postgres database from the console
Ensure the disk on your PlanetScale database has at least 150% of the capacity of your source database.
If you are migrating to a PlanetScale database backed by network-attached storage, you can resize your disk manually by setting the “Minimum disk size.”
If you are using Metal, you will need to select a size when first creating your database.
For example, if your source database is 330GB, you should have at least 500GB of storage available on PlanetScale.
Understanding of your application’s write patterns for cutover planning
Connect to your source database and create a replication user:
```sql
-- Create replication user (the password is a placeholder; use your own)
CREATE USER replication_user WITH REPLICATION LOGIN PASSWORD 'replication-password';

-- Grant necessary permissions
GRANT CONNECT ON DATABASE your_database TO replication_user;
GRANT USAGE ON SCHEMA public TO replication_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO replication_user;

-- Grant permissions for future tables
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO replication_user;
```
Create a publication for the tables you want to replicate:
```sql
-- Create publication for all tables
CREATE PUBLICATION planetscale_migration FOR ALL TABLES;

-- Or create publication for specific tables
CREATE PUBLICATION planetscale_migration FOR TABLE table1, table2, table3;

-- Verify publication
SELECT * FROM pg_publication;
```
Step 6: Create replication slot and capture snapshot
Before exporting data, create the replication slot on the source database. This captures a consistent snapshot you will use for export so the dump and replication start point stay aligned with no gaps or duplicates.
```sql
-- Run on source database in a dedicated session
-- This creates the slot and returns a snapshot name
SELECT slot_name, snapshot_name, confirmed_flush_lsn
FROM pg_create_logical_replication_slot('planetscale_slot', 'pgoutput');
```
Record the snapshot_name value from the output and use it in the next step.
Keep the source database session open after running this command. The snapshot is valid only while the creating session remains open. If the session closes, drop the slot and repeat this step to get a new snapshot.
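With the slot in place, export the data against that snapshot so the dump lines up exactly with where replication will begin. As a sketch (the host, user, database, snapshot name, and output file below are all placeholders, not values from your environment), the export might look like:

```shell
# Sketch: dump the source using the exported snapshot from Step 6.
# --snapshot ties the dump to the slot's starting point so the dump and
# the replication stream stay aligned; --no-owner and --no-privileges
# simplify restoring into a managed target.
pg_dump \
  --host=source-host \
  --username=replication_user \
  --dbname=your_database \
  --snapshot='00000003-000001A8-1' \
  --format=custom \
  --no-owner --no-privileges \
  --file=migration.dump
```

Run this from the same window in which the slot-creating session remains open; the snapshot name becomes invalid once that session closes.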
Create the subscription on PlanetScale referencing the existing slot from Step 6:
Always include copy_data = false when creating a subscription after a manual data import. Without it, PostgreSQL attempts to copy table data again and can trigger duplicate key errors.
```sql
-- Create subscription referencing the pre-created slot
CREATE SUBSCRIPTION planetscale_subscription
  CONNECTION 'host=source-host port=source-port dbname=source-database user=replication_user password=replication-password'
  PUBLICATION planetscale_migration
  WITH (
    copy_data = false,
    create_slot = false,
    slot_name = 'planetscale_slot'
  );

-- Verify subscription is active
SELECT subname, subenabled, subslotname FROM pg_subscription;

-- Confirm replication is streaming
SELECT subname, received_lsn, latest_end_lsn, latest_end_time
FROM pg_stat_subscription;
```
```sql
-- On source database
SELECT * FROM pg_replication_slots;

-- On PlanetScale database
SELECT subname, received_lsn, latest_end_lsn, latest_end_time
FROM pg_stat_subscription;
```
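To quantify how far the subscriber is behind, you can compare the source's current WAL position against the slot's confirmed position. A sketch (the slot name is assumed from Step 6):

```sql
-- Sketch: replication lag in bytes, run on the source database
SELECT slot_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS lag_bytes
FROM pg_replication_slots
WHERE slot_name = 'planetscale_slot';
```

A lag_bytes value near zero indicates the subscriber has caught up, which is what you want to see before planning cutover.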
```sql
-- Compare estimated row counts between source and target
-- Run on both databases
SELECT schemaname, relname AS tablename, n_live_tup AS estimated_rows
FROM pg_stat_user_tables
ORDER BY schemaname, relname;
```
Logical replication synchronizes table data, but it does not synchronize the nextval values for sequences in your database. Sequences are often used for auto-incrementing IDs, so update them on PlanetScale before switching application traffic.

You can see all sequences and their corresponding nextval values on your source database using this command:
```sql
SELECT schemaname, sequencename, last_value + increment_by AS next_value
FROM pg_sequences;
```
Example output:
```
 schemaname |   sequencename   | next_value
------------+------------------+------------
 public     | users_id_seq     |        105
 public     | posts_id_seq     |       1417
 public     | followers_id_seq |       3014
```
In this example, the database has three sequences used for auto-incrementing primary keys: the nextval for users_id_seq is 105, for posts_id_seq is 1417, and for followers_id_seq is 3014. If you run the same query on your PlanetScale database, the sequence values may be behind the source database. If you switch traffic to PlanetScale while sequences are behind, inserts can fail with duplicate key errors.
Before switching over, advance these sequences so the nextval values produced on PlanetScale are greater than any values previously produced on the source database. A simple approach is to run this query on your source database:
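As an illustration, a query like the following generates one setval() statement per sequence. The setval-based approach and the 10,000 offset are assumptions for this sketch, not an official query; adjust the offset to your write rate as described below.

```sql
-- Sketch only: emits one "SELECT setval(...)" statement per sequence,
-- advancing each by an assumed offset of 10,000. Run on the source
-- database, then execute the generated statements on PlanetScale.
SELECT 'SELECT setval(' ||
       quote_literal(quote_ident(schemaname) || '.' || quote_ident(sequencename)) ||
       ', ' || (last_value + 10000) || ');' AS advance_statement
FROM pg_sequences
WHERE last_value IS NOT NULL;  -- skip sequences that have never been used
```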
Execute the generated queries on your PlanetScale database. Advance each sequence far enough that the source database will not reach those nextval values before you switch your primary database to PlanetScale. For tables with a high insertion rate, increase the offset to a larger value, such as 100,000 or 1,000,000.
```sql
-- Check existing slots
SELECT * FROM pg_replication_slots;

-- Drop unused slots
SELECT pg_drop_replication_slot('slot_name');
```
Permission errors:
Verify replication user has correct permissions
Check pg_hba.conf configuration
Ensure network connectivity
Large transaction delays:
Monitor for long-running transactions on source
Consider breaking large operations into smaller batches
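As an illustration of batching (the table and column names here are hypothetical), a single large delete can be split into repeated bounded transactions so each change set stays small enough to replicate promptly:

```sql
-- Sketch: delete at most 10,000 rows per statement; rerun until it
-- reports zero rows affected. events/created_at are hypothetical names.
DELETE FROM events
WHERE ctid IN (
  SELECT ctid FROM events
  WHERE created_at < now() - interval '90 days'
  LIMIT 10000
);
```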
Subscription conflicts:
```sql
-- Check subscription worker status
SELECT * FROM pg_stat_subscription;

-- Restart subscription if needed
ALTER SUBSCRIPTION planetscale_subscription DISABLE;
ALTER SUBSCRIPTION planetscale_subscription ENABLE;
```
Duplicate key errors immediately after subscription creation:

The subscription was likely created without copy_data = false after a manual data import. Drop the subscription immediately to stop retry loops:
```sql
DROP SUBSCRIPTION planetscale_subscription;
```
Clean up leaked replication origins:
```sql
DO $$
DECLARE
  r RECORD;
BEGIN
  FOR r IN
    SELECT roname FROM pg_replication_origin
    WHERE roname NOT IN (
      SELECT 'pg_' || oid::text FROM pg_subscription
    )
  LOOP
    PERFORM pg_replication_origin_drop(r.roname);
  END LOOP;
END;
$$;
```
Then drop the replication slot on the source, repeat Steps 6-8 with a fresh snapshot, and recreate the subscription with copy_data = false.

could not find free replication state slot errors:

This is commonly caused by the duplicate key crash loop above. Each unclean worker exit can leak a replication origin until max_active_replication_origins is exhausted. Resolve the duplicate key issue first and run the origin cleanup. If the error persists, contact PlanetScale support.
Set up monitoring and alerting for the new database
Plan for ongoing maintenance and optimization
For simpler migrations, or if you don't have administrative access to your source database, consider the pg_dump/restore method. For more complex scenarios, explore Amazon DMS.

If you encounter issues during migration, please reach out to support for assistance.