5 changes: 2 additions & 3 deletions TOC-tidb-cloud-premium.md
@@ -134,9 +134,6 @@
- [Connect via Private Endpoint with Alibaba Cloud](/tidb-cloud/premium/connect-to-premium-via-alibaba-cloud-private-endpoint.md)
- [Back Up and Restore TiDB Cloud Data](/tidb-cloud/premium/backup-and-restore-premium.md)
- [Export Data from {{{ .premium }}}](/tidb-cloud/premium/premium-export.md)
- [Migrate Data to {{{ .premium }}} Using Data Migration](/tidb-cloud/premium/premium-data-migration.md)
- [Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md)
- [Migrate Incremental Data from MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md)
- Use TiFlash for HTAP
- [TiFlash Overview](/tiflash/tiflash-overview.md)
- [Create TiFlash Replicas](/tiflash/create-tiflash-replicas.md)
@@ -205,6 +202,8 @@
- Migrate or Import Data
- [Overview](/tidb-cloud/tidb-cloud-migration-overview.md)
- Migrate Data into TiDB Cloud
- [Migrate Existing and Incremental Data Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md)
- [Migrate Incremental Data Using Data Migration](/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md)
- [Migrate from TiDB Self-Managed to TiDB Cloud Premium](/tidb-cloud/premium/migrate-from-op-tidb-premium.md)
- [Migrate and Merge MySQL Shards of Large Datasets](/tidb-cloud/migrate-sql-shards.md)
- [Migrate from Amazon RDS for Oracle Using AWS DMS](/tidb-cloud/migrate-from-oracle-using-aws-dms.md)
53 changes: 51 additions & 2 deletions tidb-cloud/migrate-from-mysql-using-data-migration.md
@@ -20,7 +20,7 @@ This document guides you through migrating your MySQL databases from Amazon Auro

> **Note:**
>
> Currently, the Data Migration feature is in Public Preview for {{{ .premium }}}. For a {{{ .premium }}}-focused overview, see [Migrate Data to {{{ .premium }}} Using Data Migration](/tidb-cloud/premium/premium-data-migration.md).
> Currently, the Data Migration feature is in public preview for {{{ .premium }}}.

</CustomContent>

@@ -42,6 +42,15 @@ If you only want to replicate ongoing binlog changes from your MySQL-compatible

- Amazon Aurora MySQL writer instances support both existing data and incremental data migration. Amazon Aurora MySQL reader instances only support existing data migration and do not support incremental data migration.

<CustomContent plan="premium">

- The Data Migration feature for {{{ .premium }}} is in public preview.

- You cannot save or reuse source connection details across migration jobs.
- During public preview, additional restrictions might apply to migration jobs as the feature matures. For more information, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md).

</CustomContent>

### Maximum number of migration jobs

<CustomContent plan="dedicated">
@@ -53,6 +62,11 @@ You can create up to 200 migration jobs on {{{ .dedicated }}} clusters for each

You can create up to 100 migration jobs on {{{ .essential }}} instances for each organization. To create more migration jobs, you need to [file a support ticket](/tidb-cloud/tidb-cloud-support.md).

</CustomContent>
<CustomContent plan="premium">

You can create up to xxx TODO migration jobs on {{{ .premium }}} instances for each organization. To create more migration jobs, you need to [file a support ticket](/tidb-cloud/tidb-cloud-support.md).
**Author:**

> @alastori Please help confirm the quota for the Premium.

**Owner:**

> Code-verified (AI-assisted): Premium DM has no enforced quota. In `dataflow-service-ng/app/controllers/premium_dm/premium_dm.go`, `validateCreateTaskReq` and `isWhitelisted` are no-ops, there is no `GetPremiumDMQuota` RPC in the proto, and there is no quota field in `CreatePremiumMigrationReq`. Confirming with Leon; if a quota is enforced, we will document it in a follow-up PR. For now, the Premium variant of this section is dropped on premium-data-migration-8.5.

</CustomContent>

### Filtered out and deleted databases
@@ -96,7 +110,7 @@ To prevent this, create the target tables in the downstream database before star

<CustomContent plan="premium">

- For {{{ .premium }}}, both logical mode (default) and physical mode are supported. Logical mode exports rows as SQL statements and replays them on the target instance, consuming Request Capacity Units (RCUs) on the target during the load. Physical mode uses `IMPORT INTO` on the target instance and is recommended for large datasets where load throughput and cost are priorities.
- For {{{ .premium }}}, both logical mode (default) and physical mode are supported. Logical mode exports rows as SQL statements and replays them on the target {{{ .premium }}} instance, consuming Request Capacity Units (RCUs) during the load. Physical mode uses `IMPORT INTO` on the target {{{ .premium }}} instance, which is recommended for large datasets where load throughput and cost are priorities.
- When you use physical mode and the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .premium }}} instance. Otherwise, the migration job stops. If you need to enable PITR or have any changefeed, use logical mode instead to migrate data.
- When you use physical mode, you cannot create a second migration job or import task for the {{{ .premium }}} instance before the existing data migration is completed.

@@ -122,6 +136,12 @@

</CustomContent>

<CustomContent plan="premium">

- During incremental data migration (migrating ongoing changes to your {{{ .premium }}} instance), if the migration job recovers from an abrupt error, it might enable safe mode for 60 seconds. In safe mode, `INSERT` statements are migrated as `REPLACE` statements, and `UPDATE` statements are migrated as `DELETE` plus `REPLACE` statements. This makes the replay idempotent and ensures that all data changed around the time of the error reaches the target {{{ .premium }}} instance. However, for MySQL source tables without primary keys or non-null unique indexes, rows might be inserted repeatedly, so some data might be duplicated in the target {{{ .premium }}} instance.

</CustomContent>
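The safe-mode replay described above can be sketched as a small simulation (a hypothetical illustration, not TiDB Cloud code): applying an `INSERT` as a `REPLACE` is idempotent when the table has a primary key, but duplicates rows when it does not.

```python
# Hypothetical sketch of safe-mode replay semantics (not TiDB Cloud code).
# With a primary key, REPLACE is idempotent; without one, re-applied
# inserts accumulate as duplicate rows.

def replay_insert_as_replace(table, row, key_column=None):
    """Apply an INSERT as a REPLACE, as safe mode does."""
    if key_column is not None:
        # REPLACE semantics: delete any row with the same key, then insert.
        table[:] = [r for r in table if r[key_column] != row[key_column]]
    # Without a primary key or non-null unique index, this is a plain
    # append, so re-applying the same change duplicates the row.
    table.append(row)

keyed, keyless = [], []
change = {"id": 1, "val": "a"}

# The same binlog event is applied twice, as can happen when a job
# recovers from an abrupt error and re-enters safe mode.
for _ in range(2):
    replay_insert_as_replace(keyed, dict(change), key_column="id")
    replay_insert_as_replace(keyless, dict(change))

print(len(keyed))    # replay is idempotent: still one row
print(len(keyless))  # the row is duplicated
```

This is why the limitation above applies only to source tables without primary keys or non-null unique indexes.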

## Prerequisites

Before migrating, check whether your data source is supported, enable binary logging in your MySQL-compatible database, ensure network connectivity, and grant required privileges for both the source database and the target <CustomContent plan="dedicated">{{{ .dedicated }}} cluster</CustomContent><CustomContent plan="essential">{{{ .essential }}} instance</CustomContent><CustomContent plan="premium">{{{ .premium }}} instance</CustomContent> database.
@@ -310,6 +330,17 @@ For {{{ .essential }}}, the available connection methods are as follows:

</CustomContent>

<CustomContent plan="premium">

For {{{ .premium }}}, only public connectivity to the source database is supported. Make sure that:

- The source database accepts inbound connections from the public IP ranges that TiDB Cloud Data Migration provides during migration job creation.
- Any firewall, security group, or network ACL between the source database and the {{{ .premium }}} instance allows traffic on the source database port (typically `3306`).

The target {{{ .premium }}} instance must also be reachable. If the public endpoint of the target {{{ .premium }}} instance is disabled, enable it before creating the migration job. For more information, see [Connect to {{{ .premium }}} via Public Connection](/tidb-cloud/premium/connect-to-premium-via-public-connection.md).

</CustomContent>
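Before creating the migration job, a quick TCP reachability check can confirm that firewalls, security groups, and network ACLs actually allow traffic on the source database port (a generic sketch; the hostname and port below are placeholders, not values provided by TiDB Cloud):

```python
import socket

def is_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, host unreachable, and timeouts.
        return False

# Example (placeholder endpoint; replace with your source database):
#   is_port_reachable("source-db.example.com", 3306)
```

Note that this only verifies TCP connectivity from the machine where you run it; the Data Migration service connects from its own IP ranges, which must be allowed separately.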

Choose a connection method that best fits your cloud provider, network topology, and security requirements, and then follow the setup instructions for that method.

#### End-to-end encryption over TLS/SSL
@@ -568,6 +599,12 @@ On the **Create Migration Job** page, configure the source and target connection

</CustomContent>

<CustomContent plan="premium">

- **Connectivity method**: select **Public**.

</CustomContent>

<CustomContent plan="dedicated">

- Based on the selected **Connectivity method**, do the following:
@@ -587,6 +624,12 @@ On the **Create Migration Job** page, configure the source and target connection

</CustomContent>

<CustomContent plan="premium">

- Fill in the **Hostname or IP address** field with the public hostname or IP address of the data source.

</CustomContent>

- **Port**: the port of the data source.
- **User Name**: the username of the data source.
- **Password**: the password of the username.
@@ -641,6 +684,12 @@ On the **Create Migration Job** page, configure the source and target connection

</CustomContent>

<CustomContent plan="premium">

Add the Data Migration service's IP addresses to the IP access list of your source database and to your firewall rules (if any).

</CustomContent>

## Step 3: Choose migration job type

<CustomContent plan="dedicated">
Expand Up @@ -15,6 +15,14 @@ This document describes how to migrate incremental data from a MySQL-compatible

</CustomContent>

<CustomContent plan="premium">

> **Note:**
>
> Currently, the Data Migration feature is in public preview for {{{ .premium }}}.

</CustomContent>

For instructions about how to migrate existing data or both existing data and incremental data, see [Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md).

## Limitations
@@ -162,7 +170,7 @@ On the **Create Migration Job** page, configure the source and target connection

- **Data source**: the data source type.
- **Region**: the region of the data source, which is required for cloud databases only.
- **Connectivity method**: the connection method for the data source. <CustomContent plan="dedicated">Currently, you can choose public IP, VPC Peering, or Private Link according to your connection method.</CustomContent><CustomContent plan="essential">You can choose public IP or Private Link according to your connection method.</CustomContent>
- **Connectivity method**: the connection method for the data source. <CustomContent plan="dedicated">Currently, you can choose public IP, VPC Peering, or Private Link according to your connection method.</CustomContent><CustomContent plan="essential">You can choose public IP or Private Link according to your connection method.</CustomContent><CustomContent plan="premium">Select **Public**.</CustomContent>

<CustomContent plan="dedicated">

@@ -175,6 +183,11 @@
- **Hostname or IP address** (for public IP): the hostname or IP address of the data source.
- **Private Link Connection** (for Private Link): the private link connection that you created in the [Private Link Connections](/tidb-cloud/serverless-private-link-connection.md) section.

</CustomContent>
<CustomContent plan="premium">

- **Hostname or IP address**: the public hostname or IP address of the data source.

</CustomContent>

- **Port**: the port of the data source.
@@ -206,6 +219,12 @@ On the **Create Migration Job** page, configure the source and target connection

</CustomContent>

<CustomContent plan="premium">

Add the Data Migration service's IP addresses to the IP access list of your source database and to your firewall rules (if any).

</CustomContent>

## Step 3: Choose migration job type

To migrate only the incremental data of the source database to TiDB Cloud, select **Incremental data migration** and do not select **Existing data migration**. In this way, the migration job only migrates ongoing changes of the source database to TiDB Cloud.