AWS Database Migration Service (DMS) is a managed service that handles complex database migrations with built-in monitoring, error handling, and data validation. This method is well suited to large, complex databases that require robust migration capabilities.

Overview

This migration method involves:
  1. Pre-migration schema setup (strongly recommended for production)
  2. Setting up AWS DMS replication instance and endpoints
  3. Configuring source and target database connections
  4. Creating and running migration tasks with full load and CDC
  5. Monitoring migration progress and performing cutover
Critical: AWS DMS Schema Object Limitations. AWS DMS only migrates table data and primary keys. All other PostgreSQL schema objects must be handled separately:
  • Secondary indexes
  • Sequences and their current values
  • Views, functions, and stored procedures
  • Constraints (foreign keys, unique, check)
  • Triggers and custom data types
Deploy your complete schema to PlanetScale BEFORE starting DMS migration to preserve performance and avoid application errors.
Note: This method requires an AWS account and will incur AWS DMS charges. Review AWS DMS pricing before proceeding.
Note: For Aurora users, consider the Aurora to PlanetScale CloudFormation & DMS tutorial for a fully automated approach using CloudFormation templates and Step Functions workflows instead of manual DMS setup.

Prerequisites

Before starting the migration:
  • Active AWS account with appropriate DMS permissions
  • Source PostgreSQL database accessible from AWS (consider VPC configuration)
  • Connection details for your PlanetScale for Postgres database from the console
  • Ensure the disk on your PlanetScale database has at least 150% of the capacity of your source database. If you are migrating to a PlanetScale database backed by network-attached storage, you can resize your disk manually by setting the “Minimum disk size.” If you are using Metal, you will need to select a size when first creating your database. For example, if your source database is 330GB, you should have at least 500GB of storage available on PlanetScale.
  • Understanding of your data transformation requirements (if any)
  • Network connectivity between AWS and both source and target databases

Step 1: Pre-Migration Schema Setup

Deploy your complete schema to PlanetScale BEFORE starting DMS migration. This ensures optimal performance and prevents application errors.

Extract and Apply Schema

  1. Extract your complete schema from the source PostgreSQL database:
pg_dump -h your-postgres-host -p 5432 -U username -d database \
        --schema-only --no-owner --no-privileges \
        --exclude-table-data='*' -f schema_objects.sql
  2. Apply the schema to PlanetScale:
psql -h your-planetscale-host -p 5432 -U username -d database -f schema_objects.sql
Foreign Key Constraints: If the schema application fails due to foreign key constraint issues, you can temporarily remove them from the schema file and apply them after DMS completes the data migration (a splitting sketch follows).
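If you take that route, the sketch below splits schema_objects.sql into a constraints.sql file for later and a schema file without foreign keys. It assumes the default pg_dump plain-text layout, where each foreign key is an "ALTER TABLE ... ADD CONSTRAINT ... FOREIGN KEY" statement ending in a semicolon; the output file names are only suggestions, so review both files before applying them:
# Split FK constraints out of the schema dump (sketch; verify the output files)
awk '
  { buf = buf $0 "\n" }                      # buffer lines until a statement ends
  /;[[:space:]]*$/ {
      if (buf ~ /FOREIGN KEY/) { printf "%s", buf > "constraints.sql" }
      else                     { printf "%s", buf > "schema_without_fks.sql" }
      buf = ""
  }
' schema_objects.sql

# Apply the FK-free schema now; constraints.sql is applied after DMS finishes (Step 12)
psql -h your-planetscale-host -p 5432 -U username -d database -f schema_without_fks.sql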

Verify Schema Application

Quickly verify your schema was applied successfully:
-- Check that tables and sequences exist
SELECT
    (SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = 'public') as tables,
    (SELECT COUNT(*) FROM information_schema.sequences WHERE sequence_schema = 'public') as sequences,
    (SELECT COUNT(*) FROM pg_indexes WHERE schemaname = 'public') as indexes;

Step 2: Set Up AWS DMS

Create DMS Replication Instance

  1. Navigate to the AWS DMS Console
  2. Click “Create replication instance”
  3. Configure the instance (a CLI sketch follows):
Name: planetscale-postgres-migration
Description: Migration to PlanetScale for Postgres
Instance class: dms.t3.medium (adjust based on your needs)
Engine version: Latest available
VPC: Select appropriate VPC
Multi-AZ: No (for cost savings, Yes for production)
Publicly accessible: Yes (if needed for connectivity)
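If you prefer the CLI, the command below is a rough equivalent of the console configuration above; the subnet group identifier and security group ID are placeholders you need to replace:
aws dms create-replication-instance \
    --replication-instance-identifier planetscale-postgres-migration \
    --replication-instance-class dms.t3.medium \
    --allocated-storage 100 \
    --replication-subnet-group-identifier your-dms-subnet-group \
    --vpc-security-group-ids sg-xxxxxxxxxxxxxxxxx \
    --no-multi-az \
    --publicly-accessible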

Configure Security Groups

Ensure your replication instance can connect to:
  • Source PostgreSQL database (port 5432)
  • PlanetScale for Postgres (port 5432)
  • Internet for PlanetScale connectivity

Step 3: Create Source Endpoint

Configure PostgreSQL source endpoint:

  1. In DMS Console, go to “Endpoints” > “Create endpoint”
  2. Configure source endpoint:
Endpoint type: Source endpoint
Endpoint identifier: postgres-source
Source engine: postgres
Server name: your-postgres-host
Port: 5432
Database name: your-database-name
Username: your-username
Password: your-password

Advanced settings for PostgreSQL:

Extra connection attributes:
pluginName=test_decoding;
slotName=dms_slot_planetscale;
captureDDLs=false;
maxFileSize=32768;
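The same source endpoint can be created from the CLI; this sketch reuses the values and extra connection attributes shown above (replace the placeholders with your own):
aws dms create-endpoint \
    --endpoint-identifier postgres-source \
    --endpoint-type source \
    --engine-name postgres \
    --server-name your-postgres-host \
    --port 5432 \
    --database-name your-database-name \
    --username your-username \
    --password your-password \
    --extra-connection-attributes "pluginName=test_decoding;slotName=dms_slot_planetscale;captureDDLs=false;maxFileSize=32768;"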

Step 4: Create Target Endpoint

Configure PlanetScale for Postgres target endpoint:

  1. Create target endpoint with PlanetScale connection details:
Endpoint type: Target endpoint
Endpoint identifier: planetscale-target
Target engine: postgres
Server name: [from PlanetScale console]
Port: [from PlanetScale console]
Database name: [from PlanetScale console]
Username: [from PlanetScale console]
Password: [from PlanetScale console]

SSL Configuration:

SSL mode: require
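A CLI sketch for the target endpoint, with placeholders in place of the PlanetScale connection details; --ssl-mode require matches the SSL setting above:
aws dms create-endpoint \
    --endpoint-identifier planetscale-target \
    --endpoint-type target \
    --engine-name postgres \
    --server-name your-planetscale-host \
    --port 5432 \
    --database-name your-planetscale-database \
    --username your-planetscale-username \
    --password your-planetscale-password \
    --ssl-mode require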

Step 5: Test Endpoints

  1. Select your source endpoint and click “Test connection”
  2. Select your target endpoint and click “Test connection”
  3. Ensure both tests pass before proceeding (a CLI sketch follows)
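The same tests can be run from the CLI; the ARNs below are placeholders for the values returned when you created the instance and endpoints:
# Kick off a connection test, then check its result
aws dms test-connection \
    --replication-instance-arn arn:aws:dms:region:account:rep:your-instance-id \
    --endpoint-arn arn:aws:dms:region:account:endpoint:your-source-endpoint-id

aws dms describe-connections \
    --filters Name=endpoint-arn,Values=arn:aws:dms:region:account:endpoint:your-source-endpoint-id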

Step 6: Create Migration Task

Configure the migration task:

  1. Go to “Database migration tasks” > “Create task”
  2. Configure task settings:
Task identifier: postgres-to-planetscale
Replication instance: planetscale-postgres-migration
Source database endpoint: postgres-source
Target database endpoint: planetscale-target
Migration type: Migrate existing data and replicate ongoing changes

Task Settings

Option 1: Schema-first approach (recommended for production):
{
  "TargetMetadata": {
    "TargetSchema": "",
    "SupportLobs": true,
    "FullLobMode": true,
    "LobChunkSize": 32,
    "LimitedSizeLobMode": false,
    "LobMaxSize": 0,
    "InlineLobMaxSize": 32,
    "BatchApplyEnabled": true,
    "TaskRecoveryTableEnabled": false
  },
  "FullLoadSettings": {
    "TargetTablePrepMode": "DO_NOTHING",
    "CreatePkAfterFullLoad": false,
    "StopTaskCachedChangesApplied": false,
    "MaxFullLoadSubTasks": 8,
    "TransactionConsistencyTimeout": 600,
    "CommitRate": 10000,
    "FullLoadIgnoreConflicts": true
  },
  "ValidationSettings": {
    "EnableValidation": true,
    "ValidationMode": "ROW_LEVEL",
    "ThreadCount": 5,
    "FailureMaxCount": 10000,
    "TableFailureMaxCount": 1000
  },
  "ChangeProcessingTuning": {
    "StatementCacheSize": 50,
    "CommitTimeout": 5,
    "BatchApplyPreserveTransaction": true,
    "BatchApplyTimeoutMin": 1,
    "BatchApplyTimeoutMax": 30,
    "MinTransactionSize": 5000,
    "MemoryKeepTime": 60,
    "BatchApplyMemoryLimit": 1000,
    "MemoryLimitTotal": 2048
  },
  "Logging": {
    "EnableLogging": true,
    "LogComponents": [
      {
        "Id": "TRANSFORMATION",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },
      {
        "Id": "SOURCE_UNLOAD",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },
      {
        "Id": "TARGET_LOAD",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },
      {
        "Id": "SOURCE_CAPTURE",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },
      {
        "Id": "TARGET_APPLY",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      }
    ]
  },
  "ControlTablesSettings": {
    "historyTimeslotInMinutes": 5,
    "ControlSchema": "",
    "HistoryTableEnabled": false,
    "SuspendedTablesTableEnabled": false,
    "StatusTableEnabled": false,
    "FullLoadExceptionTableEnabled": false
  }
}
Option 2: Standard approach (matches CloudFormation template):
{
  "TargetMetadata": {
    "SupportLobs": true,
    "FullLobMode": true,
    "LobChunkSize": 32,
    "BatchApplyEnabled": true,
    "TaskRecoveryTableEnabled": false
  },
  "FullLoadSettings": {
    "TargetTablePrepMode": "DROP_AND_CREATE",
    "CreatePkAfterFullLoad": false,
    "MaxFullLoadSubTasks": 8,
    "TransactionConsistencyTimeout": 600,
    "CommitRate": 10000,
    "FullLoadIgnoreConflicts": true
  },
  "ValidationSettings": {
    "EnableValidation": true,
    "ValidationMode": "ROW_LEVEL",
    "ThreadCount": 5,
    "FailureMaxCount": 10000,
    "TableFailureMaxCount": 1000
  },
  "Logging": {
    "EnableLogging": true,
    "LogComponents": [
      {
        "Id": "SOURCE_UNLOAD",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },
      {
        "Id": "SOURCE_CAPTURE",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },
      {
        "Id": "TARGET_LOAD",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },
      {
        "Id": "TARGET_APPLY",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },
      {
        "Id": "TASK_MANAGER",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      }
    ]
  }
}
Configuration comparison:

Setting                | Schema-First | Standard        | Notes
TargetTablePrepMode    | DO_NOTHING   | DROP_AND_CREATE | Schema-first uses existing schema
ChangeProcessingTuning | Included     | Not needed      | Extra optimization for manual setup
Logging components     | 5 components | 5 components    | Option 1 logs TRANSFORMATION, option 2 logs TASK_MANAGER
ValidationSettings     | Same         | Same            | Both use row-level validation
When to use each approach:
  • Schema-First: Production systems, complex schemas, performance-critical applications
  • Standard: Simple migrations, dev/test environments, when schema objects aren’t critical during migration

Step 7: Configure Table Mappings

Basic table mapping (migrate all tables):

{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "1",
      "object-locator": {
        "schema-name": "public",
        "table-name": "%"
      },
      "rule-action": "include",
      "filters": []
    }
  ]
}

Advanced table mapping with transformations:

{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "1",
      "object-locator": {
        "schema-name": "public",
        "table-name": "%"
      },
      "rule-action": "include"
    },
    {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "2",
      "rule-target": "schema",
      "object-locator": {
        "schema-name": "public"
      },
      "rule-action": "rename",
      "value": "public"
    }
  ]
}

Step 8: Start Migration Task

  1. Review all task configurations
  2. Click “Create task” to start the migration (a CLI sketch follows this list)
  3. Monitor the task status in the DMS console
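A CLI sketch of the same task creation, assuming the task settings and table mappings from the previous steps were saved to local JSON files (the file names and ARNs are placeholders). A task created via the CLI must then be started explicitly:
aws dms create-replication-task \
    --replication-task-identifier postgres-to-planetscale \
    --replication-instance-arn arn:aws:dms:region:account:rep:your-instance-id \
    --source-endpoint-arn arn:aws:dms:region:account:endpoint:your-source-endpoint-id \
    --target-endpoint-arn arn:aws:dms:region:account:endpoint:your-target-endpoint-id \
    --migration-type full-load-and-cdc \
    --table-mappings file://table-mappings.json \
    --replication-task-settings file://task-settings.json

aws dms start-replication-task \
    --replication-task-arn arn:aws:dms:region:account:task:your-task-id \
    --start-replication-task-type start-replication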

Step 9: Monitor Migration Progress

Key metrics to monitor:

  • Full load progress: Percentage of tables loaded
  • CDC lag: Latency between source and target
  • Error count: Any migration errors
  • Throughput: Records per second

Using CloudWatch:

Set up CloudWatch alarms for:
  • High CDC latency
  • Migration errors
  • Task failures
# CLI command to check task status
aws dms describe-replication-tasks \
    --filters Name=replication-task-id,Values=your-task-id
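For per-table load progress and validation state, describe-table-statistics is a useful complement to the command above:
aws dms describe-table-statistics \
    --replication-task-arn arn:aws:dms:region:account:task:your-task-id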

Step 10: Verify Data Migration

Check table counts and data integrity:

-- Run on both source and target databases
SELECT
    schemaname,
    relname AS tablename,
    n_live_tup AS estimated_rows,
    n_tup_upd AS updated_rows,
    n_tup_del AS deleted_rows
FROM pg_stat_user_tables
ORDER BY schemaname, relname;

Validate specific data:

-- Compare checksums for critical tables
SELECT count(*) AS row_count,
       md5(string_agg(your_column::text, '' ORDER BY primary_key)) AS checksum
FROM your_important_table;
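For a more direct check, the sketch below compares exact row counts for every public-schema table on source and target. SOURCE_URL and TARGET_URL are placeholder connection strings, and COUNT(*) can be slow on very large tables:
# Compare exact row counts between source and PlanetScale (sketch)
SOURCE_URL="postgres://username:password@your-postgres-host:5432/database"
TARGET_URL="postgres://username:password@your-planetscale-host:5432/database"

psql "$SOURCE_URL" -At -c \
  "SELECT tablename FROM pg_tables WHERE schemaname = 'public' ORDER BY tablename;" |
while read -r table; do
  src=$(psql "$SOURCE_URL" -At -c "SELECT count(*) FROM public.\"$table\";")
  tgt=$(psql "$TARGET_URL" -At -c "SELECT count(*) FROM public.\"$table\";")
  [ "$src" = "$tgt" ] && status=OK || status=MISMATCH
  printf '%-40s source=%-12s target=%-12s %s\n' "$table" "$src" "$tgt" "$status"
done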

Step 11: Prepare for Cutover

Monitor CDC lag:

Ensure CDC latency is minimal (under 5 seconds) before cutover, and confirm DMS data validation has not recorded failures:
-- Check DMS validation failures (this table is created on the target when validation is enabled)
SELECT * FROM awsdms_validation_failures_v1;

Test application connectivity:

  1. Create a read-only connection to PlanetScale for Postgres
  2. Test critical application queries with EXPLAIN ANALYZE (see the sketch after this list)
  3. Verify performance matches expectations (indexes should be working)
  4. Test sequence-dependent operations (INSERT operations)
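A quick way to run such a check from a terminal is sketched below; the connection string and query are placeholders to replace with a real read-only user and a real hot-path query:
# Spot-check a critical query against PlanetScale from a read-only session (sketch)
psql "postgres://readonly_user:password@your-planetscale-host:5432/database" <<'SQL'
SET default_transaction_read_only = on;
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 12345;
SQL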

Step 12: Post-Migration Sequence Synchronization

After DMS completes, sequences need their values synchronized:
Critical: Sequence Synchronization. Sequence values must be set ahead of source database values to prevent duplicate key errors when applications start using PlanetScale.

Get Current Sequence Values from Source

-- Run on the source database to get current sequence values and generate update commands
SELECT
    sequencename,
    last_value,
    'SELECT setval(''' || schemaname || '.' || sequencename || ''', ' || (COALESCE(last_value, 1) + 1000) || ');' AS update_command
FROM pg_sequences
WHERE schemaname = 'public'
ORDER BY sequencename;

Update Sequences in PlanetScale

-- For each sequence, run the update command from above
-- Example commands (values set ahead of source):
SELECT setval('users_id_seq', 16234);  -- Source value + 1000
SELECT setval('orders_id_seq', 99765);  -- Source value + 1000
SELECT setval('products_id_seq', 6432);  -- Source value + 1000

-- Verify sequence values are ahead of source
SELECT sequencename, last_value
FROM pg_sequences
WHERE schemaname = 'public'
ORDER BY sequencename;
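If you have many sequences, generation and application can be chained in one pass. This is a sketch with placeholder connection strings; it uses the same +1000 buffer as above and assumes unquoted, lower-case sequence names:
# Generate setval() commands on the source and apply them to PlanetScale (sketch)
SOURCE_URL="postgres://username:password@your-postgres-host:5432/database"
TARGET_URL="postgres://username:password@your-planetscale-host:5432/database"

psql "$SOURCE_URL" -At -c "
  SELECT 'SELECT setval(''' || schemaname || '.' || sequencename || ''', '
         || COALESCE(last_value, 1) + 1000 || ');'
  FROM pg_sequences
  WHERE schemaname = 'public';" | psql "$TARGET_URL"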

Apply Remaining Constraints

Now apply foreign key constraints that were deferred:
-- Apply foreign key constraints
\i constraints.sql

-- Verify constraints were applied successfully
SELECT conname, contype, conrelid::regclass AS table_name
FROM pg_constraint
WHERE connamespace = 'public'::regnamespace
  AND contype = 'f'  -- foreign key constraints
ORDER BY conrelid::regclass::text;

Step 13: Comprehensive Pre-Cutover Validation

Complete Validation Required: Validate ALL schema objects and data integrity before cutover. Missing objects will cause application failures.
-- Validate table row counts roughly match the source
SELECT
    schemaname,
    relname AS tablename,
    n_live_tup AS estimated_rows
FROM pg_stat_user_tables
WHERE schemaname = 'public'
ORDER BY relname;
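Beyond row counts, a quick way to compare schema objects is to run the same summary query against both databases and diff the output. This sketch reuses the SOURCE_URL and TARGET_URL placeholders from the row-count check above; DMS control tables or other environment-specific objects can cause benign differences:
# Compare schema object counts between source and target (sketch)
OBJECT_COUNT_SQL="
  SELECT 'tables' AS object, count(*) FROM information_schema.tables
    WHERE table_schema = 'public' AND table_type = 'BASE TABLE'
  UNION ALL SELECT 'indexes', count(*) FROM pg_indexes WHERE schemaname = 'public'
  UNION ALL SELECT 'sequences', count(*) FROM pg_sequences WHERE schemaname = 'public'
  UNION ALL SELECT 'views', count(*) FROM information_schema.views WHERE table_schema = 'public'
  UNION ALL SELECT 'fk_constraints', count(*) FROM pg_constraint
    WHERE connamespace = 'public'::regnamespace AND contype = 'f';"

diff <(psql "$SOURCE_URL" -At -c "$OBJECT_COUNT_SQL") \
     <(psql "$TARGET_URL" -At -c "$OBJECT_COUNT_SQL") \
  && echo "Schema object counts match" \
  || echo "Schema object counts differ: investigate before cutover"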

Step 14: Perform Cutover

When ready to switch to PlanetScale for Postgres:
  1. Stop application writes to source database
  2. Wait for CDC to catch up (monitor lag in DMS console)
  3. Verify final data consistency
  4. Update application connection strings to point to PlanetScale
  5. Start application with new connections
  6. Stop DMS task once satisfied with cutover

Stop the migration task:

aws dms stop-replication-task \
    --replication-task-arn arn:aws:dms:region:account:task:task-id

Step 15: Cleanup

The task configuration above is already optimized for schema-first migrations. Key settings:
  • DO_NOTHING prep mode preserves your existing schema
  • Row-level validation ensures data integrity
  • Batch processing optimizations improve performance
  • Memory tuning handles large datasets efficiently
Automated vs. Manual Configuration: For Aurora migrations, consider using the automated CloudFormation approach, which includes these optimized settings and additional automation features.
After successful cutover and schema migration:
  1. Delete DMS task
  2. Delete replication instance (to stop charges)
  3. Remove source and target endpoints
  4. Clean up security groups if created specifically for migration
# Cleanup commands
aws dms delete-replication-task --replication-task-arn your-task-arn
aws dms delete-replication-instance --replication-instance-arn your-instance-arn
aws dms delete-endpoint --endpoint-arn your-source-endpoint-arn
aws dms delete-endpoint --endpoint-arn your-target-endpoint-arn

Troubleshooting

Common Issues:

Connectivity problems:
  • Check security groups and network ACLs
  • Verify endpoint configurations
  • Test network connectivity from replication instance
Performance issues:
  • Increase replication instance size
  • Adjust parallel load settings
  • Monitor source database performance
Data type mapping issues with large objects (LOBs):
{
  "TargetMetadata": {
    "SupportLobs": true,
    "FullLobMode": true,
    "LobChunkSize": 32768,
    "LimitedSizeLobMode": false
  }
}
“sequence does not exist” errors:
-- Check if sequence exists
SELECT * FROM information_schema.sequences WHERE sequence_name = 'your_sequence';

-- Recreate missing sequence
CREATE SEQUENCE your_sequence START WITH 1;
SELECT setval('your_sequence', (SELECT MAX(id) FROM your_table));
Missing indexes causing performance issues:
-- Find missing indexes by comparing to source
-- Run on source database to get index list
SELECT indexname, indexdef FROM pg_indexes WHERE schemaname = 'public';

-- Check query performance
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM your_table WHERE indexed_column = 'value';
Foreign key constraint violations:
-- Check for constraint violations before applying
SELECT COUNT(*) FROM child_table c
WHERE NOT EXISTS (SELECT 1 FROM parent_table p WHERE p.id = c.parent_id);

-- Apply constraints one by one to isolate issues
ALTER TABLE child_table ADD CONSTRAINT fk_name FOREIGN KEY (parent_id) REFERENCES parent_table(id);
Functions/views with dependency errors:
-- Check dependencies
SELECT * FROM pg_depend WHERE objid = 'your_function'::regproc;

-- Apply in dependency order: functions before views that use them
Permission errors during schema application:
  • Ensure PlanetScale database user has CREATE permissions
  • Check if objects already exist and need DROP statements
  • Verify user has permissions on referenced objects
Sequence values too low causing duplicate key errors:
-- Check current sequence value vs max table value
SELECT last_value FROM your_sequence;
SELECT MAX(id) FROM your_table;

-- Update sequence to safe value
SELECT setval('your_sequence', (SELECT MAX(id) FROM your_table));

Performance Optimization:

  1. Parallel loading: Increase MaxFullLoadSubTasks
  2. Batch apply: Enable for better target performance
  3. Memory allocation: Increase replication instance size
  4. Network optimization: Use placement groups for better network performance

Cost Optimization

  • Instance sizing: Start with smaller instances and scale up if needed
  • Multi-AZ: Disable for dev/test migrations
  • Task lifecycle: Delete resources immediately after successful migration
  • Data transfer: Consider AWS region placement to minimize transfer costs

Schema Considerations

Before migration, review your schema objects and plan the post-migration work they require.
Important: Plan additional time for post-migration schema object setup. Complex databases may require several hours for index recreation and sequence synchronization.
Performance impact: Large indexes can take hours to rebuild on populated tables. Consider the schema-first approach to avoid this penalty.

Next Steps

After successful migration and schema setup:
  1. Run comprehensive post-cutover validation using all verification queries above
  2. Monitor application logs for any sequence or constraint errors
  3. Compare a performance baseline of query performance against the source database
  4. Test critical business workflows end-to-end
  5. Set up monitoring and alerting for PlanetScale for Postgres
  6. Plan for ongoing maintenance and backup strategies
  7. Consider implementing additional PlanetScale features
Success Criteria:
  • ✅ All schema objects validated and functional
  • ✅ Sequence values synchronized and tested
  • ✅ Query performance matches or exceeds source database
  • ✅ No application errors in logs for 24+ hours
  • ✅ All foreign key constraints working correctly
For simpler migrations, consider pg_dump/restore or WAL streaming methods. If you encounter issues during migration, please reach out to support for assistance.

Need help?

Get help from the PlanetScale Support team, or join our GitHub discussion board to see how others are using PlanetScale.