Overview
This migration method involves:
1. Pre-migration schema setup (strongly recommended for production)
2. Setting up an AWS DMS replication instance and endpoints
3. Configuring source and target database connections
4. Creating and running migration tasks with full load and CDC
5. Monitoring migration progress and performing cutover
Critical: AWS DMS Schema Object Limitations
AWS DMS only migrates table data and primary keys. All other PostgreSQL schema objects must be handled separately:
- Secondary indexes
- Sequences and their current values
- Views, functions, and stored procedures
- Constraints (foreign keys, unique, check)
- Triggers and custom data types
Note: This method requires an AWS account and will incur AWS DMS charges. Review AWS DMS pricing before proceeding.
Note: For Aurora users, consider the Aurora to PlanetScale CloudFormation & DMS tutorial for a fully automated approach using CloudFormation templates and Step Functions workflows instead of manual DMS setup.
Prerequisites
Before starting the migration:
- Active AWS account with appropriate DMS permissions
- Source PostgreSQL database accessible from AWS (consider VPC configuration)
- Connection details for your PlanetScale for Postgres database from the console
- Ensure the disk on your PlanetScale database has at least 150% of the capacity of your source database. If you are migrating to a PlanetScale database backed by network-attached storage, you can resize your disk manually by setting the “Minimum disk size.” If you are using Metal, you will need to select a size when first creating your database. For example, if your source database is 330GB, you should have at least 500GB of storage available on PlanetScale.
- Understanding of your data transformation requirements (if any)
- Network connectivity between AWS and both source and target databases
Step 1: Pre-Migration Schema Setup
Deploy your complete schema to PlanetScale BEFORE starting the DMS migration. This ensures optimal performance and prevents application errors.
Extract and Apply Schema
1. Extract your complete schema from the source PostgreSQL database (see the pg_dump sketch below).
2. Apply the schema to PlanetScale (see the psql example below).
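A minimal sketch of both steps; the hostnames, database name, and credentials are placeholders, so substitute your own connection details from the PlanetScale console:

```bash
# 1. Dump schema only (no data) from the source database.
#    --no-owner/--no-privileges avoid role mismatches on the target.
pg_dump \
  --schema-only \
  --no-owner \
  --no-privileges \
  --host source-db.example.com \
  --username postgres \
  --dbname mydb \
  --file schema.sql

# 2. Apply the schema to PlanetScale for Postgres (the connection
#    string is a placeholder; copy yours from the PlanetScale console).
psql "postgres://user:password@host.example.com:5432/mydb?sslmode=verify-full" \
  --file schema.sql
```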
Foreign Key Constraints
If the schema application fails due to foreign key constraint issues, you can temporarily remove them from the schema file and apply them after DMS completes the data migration.
Verify Schema Application
Quickly verify that your schema was applied successfully, for example with the catalog queries sketched below:
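A minimal sketch, assuming your objects live in the public schema:

```sql
-- Run against PlanetScale; compare the counts with the source database.
SELECT count(*) AS tables    FROM pg_tables  WHERE schemaname = 'public';
SELECT count(*) AS indexes   FROM pg_indexes WHERE schemaname = 'public';
SELECT count(*) AS sequences FROM information_schema.sequences
WHERE sequence_schema = 'public';
```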
Step 2: Set Up AWS DMS
Create DMS Replication Instance
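One way to create the instance is via the AWS CLI; the identifier, instance class, and storage size below are illustrative:

```bash
aws dms create-replication-instance \
  --replication-instance-identifier pg-migration-instance \
  --replication-instance-class dms.r5.large \
  --allocated-storage 100 \
  --no-multi-az
```

Size the instance to your workload; you can scale it up later if full-load throughput is too low.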
Configure Security Groups
Ensure your replication instance can connect to:
- Source PostgreSQL database (port 5432)
- PlanetScale for Postgres (port 5432)
- Internet for PlanetScale connectivity
Step 3: Create Source Endpoint
Configure the PostgreSQL source endpoint:
1. In the DMS Console, go to “Endpoints” > “Create endpoint”.
2. Configure the source endpoint.
Advanced settings for PostgreSQL can be passed as extra connection attributes; a CLI sketch covering both is shown below.
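A sketch of the source endpoint via the AWS CLI. The server name, database, and credentials are placeholders, and the extra connection attributes shown are common choices for CDC (the source must also run with wal_level = logical):

```bash
aws dms create-endpoint \
  --endpoint-identifier pg-source \
  --endpoint-type source \
  --engine-name postgres \
  --server-name source-db.example.com \
  --port 5432 \
  --database-name mydb \
  --username dms_user \
  --password '<password>' \
  --extra-connection-attributes "heartbeatEnable=true;heartbeatFrequency=1"
```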
Step 4: Create Target Endpoint
Configure the PlanetScale for Postgres target endpoint:
1. Create the target endpoint with your PlanetScale connection details.
SSL configuration: PlanetScale requires TLS, so set the endpoint's SSL mode to require (or verify-full with the appropriate CA certificate). A CLI sketch is shown below.
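A matching sketch for the target endpoint; the host and credentials are placeholders taken from your PlanetScale console:

```bash
aws dms create-endpoint \
  --endpoint-identifier planetscale-target \
  --endpoint-type target \
  --engine-name postgres \
  --server-name <your-planetscale-host> \
  --port 5432 \
  --database-name mydb \
  --username dms_user \
  --password '<password>' \
  --ssl-mode require
```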
Step 5: Test Endpoints
1. Select your source endpoint and click “Test connection”.
2. Select your target endpoint and click “Test connection”.
3. Ensure both tests pass before proceeding.
Step 6: Create Migration Task
Configure the migration task:
1. Go to “Database migration tasks” > “Create task”.
2. Configure the task settings:
Task Settings
Option 1: Schema-first approach (recommended for production):

| Setting | Schema-First | Standard | Notes |
| --- | --- | --- | --- |
| TargetTablePrepMode | DO_NOTHING | DROP_AND_CREATE | Schema-first uses existing schema |
| ChangeProcessingTuning | Included | Not needed | Extra optimization for manual setup |
| Logging Components | 5 components | 5 components | Both include all DMS components |
| ValidationSettings | Same | Same | Both use row-level validation |

When to use each approach (a sketch of the schema-first task-settings JSON follows this list):
- Schema-First: Production systems, complex schemas, performance-critical applications
- Standard: Simple migrations, dev/test environments, when schema objects aren’t critical during migration
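A minimal sketch of the schema-first task settings, assuming DMS defaults for everything not shown:

```json
{
  "FullLoadSettings": {
    "TargetTablePrepMode": "DO_NOTHING",
    "MaxFullLoadSubTasks": 8
  },
  "TargetMetadata": {
    "BatchApplyEnabled": true
  },
  "ChangeProcessingTuning": {
    "BatchApplyTimeoutMin": 1,
    "BatchApplyTimeoutMax": 30
  },
  "ValidationSettings": {
    "EnableValidation": true,
    "ValidationMode": "ROW_LEVEL"
  },
  "Logging": {
    "EnableLogging": true,
    "LogComponents": [
      { "Id": "SOURCE_UNLOAD",  "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "SOURCE_CAPTURE", "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "TARGET_LOAD",    "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "TARGET_APPLY",   "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "TASK_MANAGER",   "Severity": "LOGGER_SEVERITY_DEFAULT" }
    ]
  }
}
```

For the standard approach, switch TargetTablePrepMode to DROP_AND_CREATE.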
Step 7: Configure Table Mappings
Basic table mapping (migrate all tables):
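A sketch that includes every table in the public schema (the schema name is an assumption):

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-all-public-tables",
      "object-locator": { "schema-name": "public", "table-name": "%" },
      "rule-action": "include"
    }
  ]
}
```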
Advanced table mapping with transformations:
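An illustrative variant that adds a transformation rule (here, lowercasing table names; adapt the rule-action to your needs):

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-all-public-tables",
      "object-locator": { "schema-name": "public", "table-name": "%" },
      "rule-action": "include"
    },
    {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "lowercase-table-names",
      "rule-target": "table",
      "object-locator": { "schema-name": "public", "table-name": "%" },
      "rule-action": "convert-lowercase"
    }
  ]
}
```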
Step 8: Start Migration Task
1. Review all task configurations.
2. Click “Create task” to start the migration.
3. Monitor the task status in the DMS console.
Step 9: Monitor Migration Progress
Key metrics to monitor:
- Full load progress: Percentage of tables loaded
- CDC lag: Latency between source and target
- Error count: Any migration errors
- Throughput: Records per second
Using CloudWatch:
Set up CloudWatch alarms (see the CLI sketch after this list) for:
- High CDC latency
- Migration errors
- Task failures
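For example, a latency alarm via the AWS CLI; the instance and task identifiers are the illustrative names used in earlier steps:

```bash
# Alarm when target-side CDC latency averages above 60s for 5 minutes.
aws cloudwatch put-metric-alarm \
  --alarm-name dms-cdc-latency-high \
  --namespace AWS/DMS \
  --metric-name CDCLatencyTarget \
  --dimensions Name=ReplicationInstanceIdentifier,Value=pg-migration-instance \
               Name=ReplicationTaskIdentifier,Value=pg-to-planetscale-task \
  --statistic Average \
  --period 60 \
  --evaluation-periods 5 \
  --threshold 60 \
  --comparison-operator GreaterThanThreshold
```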
Step 10: Verify Data Migration
Check table counts and data integrity:
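A quick sketch; run it on both source and target and compare (n_live_tup is an estimate, so use count(*) for exact checks):

```sql
SELECT relname AS table_name, n_live_tup AS approx_row_count
FROM pg_stat_user_tables
ORDER BY relname;

-- Exact count for a critical table (table name is hypothetical):
SELECT count(*) FROM orders;
```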
Validate specific data:
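For example, spot-check the most recent rows of an important table against the source (table and column names are hypothetical):

```sql
SELECT id, created_at
FROM orders
ORDER BY created_at DESC
LIMIT 10;
```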
Step 11: Prepare for Cutover
Monitor CDC lag:
Ensure CDC latency is minimal (under 5 seconds) before cutover; a CLI sketch for checking it is shown below.
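One way to check, assuming the illustrative identifiers from earlier steps (GNU date syntax shown):

```bash
# Average CDCLatencyTarget (seconds) over the last 15 minutes.
aws cloudwatch get-metric-statistics \
  --namespace AWS/DMS \
  --metric-name CDCLatencyTarget \
  --dimensions Name=ReplicationInstanceIdentifier,Value=pg-migration-instance \
               Name=ReplicationTaskIdentifier,Value=pg-to-planetscale-task \
  --start-time "$(date -u -d '15 minutes ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 60 \
  --statistics Average
```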
Test application connectivity:
- Create a read-only connection to PlanetScale for Postgres
- Test critical application queries with EXPLAIN ANALYZE
- Verify performance matches expectations (indexes should be working)
- Test sequence-dependent operations (INSERT operations)
Step 12: Post-Migration Sequence Synchronization
After DMS completes, sequences need their values synchronized.
Critical: Sequence Synchronization
Sequence values must be set ahead of the source database's values to prevent duplicate key errors when applications start writing to PlanetScale.
Get Current Sequence Values from Source
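A sketch using the pg_sequences view (PostgreSQL 10+), assuming sequences live in the public schema:

```sql
-- Run on the source database.
SELECT schemaname, sequencename, last_value
FROM pg_sequences
WHERE schemaname = 'public';
```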
Update Sequences in PlanetScale
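A sketch that generates setval statements with a safety buffer: run the query on the source, then execute its output on PlanetScale. The 1000-value buffer is an arbitrary assumption; size it to your write rate:

```sql
SELECT format('SELECT setval(%L, %s);',
              schemaname || '.' || sequencename,
              COALESCE(last_value, 1) + 1000)
FROM pg_sequences
WHERE schemaname = 'public';

-- Or set a single sequence directly (sequence name is hypothetical):
SELECT setval('public.orders_id_seq', 12345 + 1000);
```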
Apply Remaining Constraints
Now apply any foreign key constraints that were deferred during schema setup (a sketch is shown below).
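A sketch with hypothetical table and column names; on large tables, adding the constraint as NOT VALID and validating afterwards keeps lock times short:

```sql
ALTER TABLE orders
  ADD CONSTRAINT orders_customer_id_fkey
  FOREIGN KEY (customer_id) REFERENCES customers (id)
  NOT VALID;

ALTER TABLE orders VALIDATE CONSTRAINT orders_customer_id_fkey;
```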
Step 13: Comprehensive Pre-Cutover Validation
Complete Validation Required
Validate ALL schema objects and data integrity before cutover; missing objects will cause application failures. A sketch of object-count queries is shown below.
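One way to compare object inventories, assuming the public schema; run this on both databases and diff the results:

```sql
SELECT 'tables' AS object_type, count(*) AS total
FROM pg_tables WHERE schemaname = 'public'
UNION ALL
SELECT 'indexes', count(*)
FROM pg_indexes WHERE schemaname = 'public'
UNION ALL
SELECT 'sequences', count(*)
FROM information_schema.sequences WHERE sequence_schema = 'public'
UNION ALL
SELECT 'foreign keys', count(*)
FROM information_schema.table_constraints
WHERE table_schema = 'public' AND constraint_type = 'FOREIGN KEY';
```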
Step 14: Perform Cutover
When you are ready to switch to PlanetScale for Postgres:
1. Stop application writes to the source database.
2. Wait for CDC to catch up (monitor lag in the DMS console).
3. Verify final data consistency.
4. Update application connection strings to point to PlanetScale.
5. Start the application with the new connections.
6. Stop the DMS task once you are satisfied with the cutover.
Stop the migration task:
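Via the AWS CLI, with a placeholder task ARN:

```bash
aws dms stop-replication-task \
  --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLE
```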
The task configuration above is already optimized for schema-first migrations. Key settings:
- DO_NOTHING prep mode preserves your existing schema
- Row-level validation ensures data integrity
- Batch processing optimizations improve performance
- Memory tuning handles large datasets efficiently
Automated vs Manual Configuration
For Aurora migrations, consider using the automated CloudFormation approach, which includes these optimized settings and additional automation features.
Step 15: Cleanup
1. Delete the DMS task.
2. Delete the replication instance (to stop charges).
3. Remove the source and target endpoints.
4. Clean up any security groups created specifically for the migration.
Troubleshooting
Common Issues:
Connectivity problems:
- Check security groups and network ACLs
- Verify endpoint configurations
- Test network connectivity from the replication instance
Slow migration performance:
- Increase the replication instance size
- Adjust parallel load settings
- Monitor source database performance
Data type issues:
- Review DMS data type mapping
- Use transformation rules for custom mappings
Schema-Related Troubleshooting:
“sequence does not exist” errors:
- Ensure the PlanetScale database user has CREATE permissions
- Check whether objects already exist and need DROP statements
- Verify the user has permissions on referenced objects
Performance Optimization:
- Parallel loading: Increase MaxFullLoadSubTasks
- Batch apply: Enable for better target performance
- Memory allocation: Increase replication instance size
- Network optimization: Use placement groups for better network performance
Cost Optimization
- Instance sizing: Start with smaller instances and scale up if needed
- Multi-AZ: Disable for dev/test migrations
- Task lifecycle: Delete resources immediately after successful migration
- Data transfer: Consider AWS region placement to minimize transfer costs
Schema Considerations
Before migration, review the schema objects that DMS will not migrate (see the limitations above).
Important: Plan additional time for post-migration schema object setup. Complex databases may require several hours for index recreation and sequence synchronization.
Performance Impact Note: Large indexes can take hours to rebuild on populated tables. Consider the schema-first approach to avoid this performance penalty.
Next Steps
After successful migration and schema setup:
1. Run comprehensive post-cutover validation using all the verification queries above.
2. Monitor application logs for any sequence or constraint errors.
3. Compare query performance against a baseline from the source database.
4. Test critical business workflows end-to-end.
5. Set up monitoring and alerting for PlanetScale for Postgres.
6. Plan for ongoing maintenance and backup strategies.
7. Consider implementing additional PlanetScale features.
You can consider the migration complete when:
- ✅ All schema objects validated and functional
- ✅ Sequence values synchronized and tested
- ✅ Query performance matches or exceeds source database
- ✅ No application errors in logs for 24+ hours
- ✅ All foreign key constraints working correctly