Note:
This topic has been translated from a Chinese forum by GPT and might contain errors. Original topic: [TiDB Community Upgrade Mutual Aid Materials] The Complete TiDB Version Upgrade Material Pack

1. Upgrade Version Selection
TiDB 7.5.1 Release Notes: TiDB 7.5.1 Release Notes | PingCAP Documentation Center
TiDB 7.1.5 Release Notes: TiDB 7.1.5 Release Notes | PingCAP Documentation Center
TiDB 6.5.9 Release Notes: TiDB 6.5.9 Release Notes | PingCAP Documentation Center
Main differences between TiDB 6.X and 7.X versions: 7.X versions have Resource Control functionality.
7.5.x Related Feature Interpretation
7.1.x Related Feature Interpretation
- New Feature Analysis | Design and Scenario Analysis of TiDB Resource Control
- TiDB v7.1.0 Version Related (Deployment, Online Scaling, Data Migration) Testing
- TiDB 7.1.0 LTS Feature Interpretation | A Brief Analysis of TiSpark v3.x New Changes
- TiDB 7.1.0 LTS Feature Interpretation | 6 Things You Should Know About Resource Control
- TiDB 7.x Source Code Compilation of TiDB Server and New Feature Explanation
- TiDB 7.x Source Code Compilation of TiUP and New Feature Analysis
6.5.x Related Feature Interpretation
- Column - Interpretation of New Features in TiDB (6.0~6.6) | TiDB Community
- Column - Ultimate Speed: A 10x TiDB Online DDL Performance Improvement | TiDB Community
2. Upgrade Plan Selection
Reference: Column - TiDB Upgrade Plan Selection | TiDB Community
3. Introduction to Upgrade Tools & FAQ
TiUP is the package manager for TiDB on physical or virtual machines. It manages the many components of the TiDB ecosystem, such as TiDB, PD, and TiKV. Starting from TiDB v4.0, running any component in the TiDB ecosystem takes only a single TiUP command.
TiUP Documentation: TiUP Overview | PingCAP Documentation Center
TiUP FAQ: TiUP FAQ | PingCAP Documentation Center
Dumpling is a data export tool that exports data stored in TiDB or MySQL as SQL or CSV files for logical full backup. Dumpling also supports exporting data to Amazon S3.
Dumpling Documentation: Export Data Using Dumpling | PingCAP Documentation Center
TiDB Lightning is a data import tool used to import TB-scale data from static files into a TiDB cluster; it is commonly used for the initial data import of a TiDB cluster.
Lightning Documentation: TiDB Lightning Overview | PingCAP Documentation Center
Requirements for Importing into (New) Databases: TiDB Lightning Target Database Requirements | PingCAP Documentation Center
Lightning Common Troubleshooting: TiDB Lightning Troubleshooting | PingCAP Documentation Center
TiCDC is an incremental data replication tool that replicates TiDB incremental data by pulling TiKV change logs. Typical application scenarios of TiCDC include setting up primary-secondary replication between multiple TiDB clusters or building data integration services with other heterogeneous systems.
TiCDC Documentation: TiCDC Overview | PingCAP Documentation Center
TiCDC Common Issues and Solutions: TiCDC Troubleshooting | PingCAP Documentation Center
4. What Preparations Should Be Made Before Upgrading?
- Upgrade FAQ: Common Issues During and After Upgrade | PingCAP Documentation Center
- Changes in TiDB Feature Support Across Different Versions: TiDB Feature Overview | PingCAP Documentation Center
- Understand the health status of the system
  - Confirm whether the cluster topology meets high-availability requirements
  - Check whether the cluster topology is healthy
  - Ensure the hardware configuration meets standards
- Review cluster usage
  - Cluster data volume
  - Large tables
  - Table width and number of fields
  - SQL statement DDL/DML execution status and QPS
  - Charset compatibility
Common Issues During Upgrade
(1) What are the impacts of rolling upgrades?
During the rolling upgrade of TiDB, business operations will be affected to some extent. Therefore, it is not recommended to perform rolling upgrades during peak business hours. The minimum cluster topology configuration (TiDB * 2, PD * 3, TiKV * 3) is required. If there are Pump and Drainer services in the cluster environment, it is recommended to stop Drainer first, then perform the rolling upgrade (Pump will be upgraded when TiDB is upgraded).
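The minimum-topology requirement above can be turned into a simple pre-flight check. Below is a hypothetical sketch (the component names and counts come from the paragraph above; the helper itself is not part of TiUP):

```python
# Minimum instance counts required for a rolling upgrade,
# per the paragraph above: TiDB * 2, PD * 3, TiKV * 3.
MIN_TOPOLOGY = {"tidb": 2, "pd": 3, "tikv": 3}

def rolling_upgrade_ready(instance_counts):
    """Return the list of components that fall short of the minimum
    topology; an empty list means the cluster meets the
    rolling-upgrade requirement."""
    return [
        comp
        for comp, required in MIN_TOPOLOGY.items()
        if instance_counts.get(comp, 0) < required
    ]

# Example: a cluster with a single TiDB node fails the check.
print(rolling_upgrade_ready({"tidb": 1, "pd": 3, "tikv": 3}))  # ['tidb']
print(rolling_upgrade_ready({"tidb": 2, "pd": 3, "tikv": 3}))  # []
```

In practice you would read the counts from `tiup cluster display` output rather than hard-coding them.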
(2) Can the cluster be upgraded while executing DDL requests?
- If the TiDB version before the upgrade is lower than v7.1.0:
  - Do not perform upgrade operations while there are DDL statements being executed in the cluster (usually long-running DDL statements such as ADD INDEX and column type changes). Before upgrading, it is recommended to use the ADMIN SHOW DDL command to check whether there are ongoing DDL jobs in the cluster. If you need to upgrade, wait for the DDL to complete, or use the ADMIN CANCEL DDL command to cancel the DDL job before upgrading.
  - Do not execute DDL statements during the TiDB cluster upgrade process, as this may result in undefined behavior.
- If the TiDB version before the upgrade is v7.1.0 or higher:
  - You do not need to follow the restrictions for lower-version upgrades; that is, the cluster can accept user DDL tasks during the upgrade. It is recommended to refer to Smooth Upgrade of TiDB.
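The v7.1.0 boundary above can be checked mechanically before kicking off an upgrade. A minimal sketch, assuming plain `vX.Y.Z` version strings (this is a string comparison helper, not an official TiDB or TiUP API):

```python
def parse_version(v):
    """Parse a version string like 'v6.5.9' into a tuple of ints."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def ddl_restrictions_apply(current_version):
    """The DDL restrictions above apply when the pre-upgrade
    version is lower than v7.1.0."""
    return parse_version(current_version) < (7, 1, 0)

print(ddl_restrictions_apply("v6.5.9"))  # True: finish or cancel DDL jobs first
print(ddl_restrictions_apply("v7.1.5"))  # False: smooth upgrade accepts DDL
```

For versions below the boundary, run ADMIN SHOW DDL first and only proceed once no long-running jobs remain.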
7.5.x Upgrade Practices
7.1.x Upgrade Practices
- A 39.3T Cluster Migration and Upgrade Practice from TiDB v3.1.0 to TiDB v7.1.2
- Complete Offline Upgrade to TiDB v7.1 in Three Simple Steps (Server Without Internet Access)
- TiDB v7.1.1 Three-Region Five-Center: Best Practice Exploration of a TiDB POC
- TiDB Same-City Dual-Center Monitoring Component High Availability Solution
- A Complete Setup Process of TiDB v7.1 in a Production Environment
- HAProxy Installation and Practical Load Balancing Setup for the TiDB Database
- Practical Setup of a TiDB Load Balancing Environment - LVS + KeepAlived
- Practical Setup of a TiDB Load Balancing Environment - HAProxy + KeepAlived
- TiDB v7.1.0: Precise Resource Allocation for Smooth Data Operation!
During Upgrade
This document applies to the following upgrade paths:
Using TiUP to upgrade from TiDB 4.0 to TiDB 7.5.
Using TiUP to upgrade from TiDB 5.0-5.4 to TiDB 7.5.
Using TiUP to upgrade from TiDB 6.0-6.6 to TiDB 7.5.
Using TiUP to upgrade from TiDB 7.0-7.4 to TiDB 7.5.
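The four paths above amount to one rule: any release line from 4.0 through 7.4 can be upgraded straight to 7.5 with TiUP. A hypothetical helper encoding just that rule (the version ranges are taken from the list above; anything outside them is simply reported as unsupported):

```python
def can_upgrade_directly_to_75(version):
    """Return True if `version` (e.g. 'v5.4.3') falls in one of the
    ranges listed above as directly upgradable to TiDB 7.5:
    4.0, 5.0-5.4, 6.0-6.6, 7.0-7.4."""
    major, minor = (int(x) for x in version.lstrip("v").split(".")[:2])
    supported = {
        4: range(0, 1),  # 4.0
        5: range(0, 5),  # 5.0-5.4
        6: range(0, 7),  # 6.0-6.6
        7: range(0, 5),  # 7.0-7.4
    }
    return major in supported and minor in supported[major]

print(can_upgrade_directly_to_75("v6.5.9"))   # True
print(can_upgrade_directly_to_75("v3.0.20"))  # False: not a listed path
```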
Warning
(1) TiFlash components cannot be upgraded online from versions earlier than 5.3 to 5.3 or later; only offline upgrade is supported. If other components in the cluster (such as tidb and tikv) cannot be upgraded offline, refer to the precautions in Non-Stop Upgrade.
(2) Do not execute DDL statements during the TiDB cluster upgrade process, as this may result in undefined behavior.
(3) Do not perform upgrade operations while there are DDL statements being executed in the cluster (usually long-running DDL statements such as ADD INDEX and column type changes). Before upgrading, it is recommended to use the ADMIN SHOW DDL command to check whether there are ongoing DDL jobs in the cluster. If you need to upgrade, wait for the DDL to complete, or use the ADMIN CANCEL DDL command to cancel the DDL job before upgrading.
When upgrading from TiDB v7.1 to a higher version, you can ignore restrictions (2) and (3) above. It is recommended to refer to the Smooth Upgrade of TiDB restrictions.
After Upgrade
Common Issues After Upgrade
This section lists some issues that may be encountered after the upgrade and their solutions.
Charset Issues When Executing DDL Operations
In TiDB v2.1.0 and earlier versions (including all v2.0 versions), the default charset is UTF8. Starting from v2.1.1, the default charset has been changed to UTF8MB4. If the charset of the table was explicitly specified as UTF8 when creating the table in v2.1.0 and earlier versions, executing DDL operations may fail after upgrading to v2.1.1.
To avoid this issue, pay attention to the following two points:
- Before v2.1.3, TiDB does not support modifying the charset of columns, so when executing DDL operations, the charset of the new column must be consistent with that of the old column.
- Before v2.1.3, even if the charset of a column differs from that of the table, show create table does not display the column's charset; you can instead view it by fetching the table metadata through the HTTP API, as shown in the example below.
unsupported modify column charset utf8mb4 not match origin utf8
- Before Upgrade: v2.1.0 and earlier versions
create table t(a varchar(10)) charset=utf8;
Query OK, 0 rows affected
Time: 0.106s
show create table t
+-------+-------------------------------------------------------+
| Table | Create Table |
+-------+-------------------------------------------------------+
| t | CREATE TABLE `t` ( |
| | `a` varchar(10) DEFAULT NULL |
| | ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin |
+-------+-------------------------------------------------------+
1 row in set
Time: 0.006s
- After Upgrade: v2.1.1 and v2.1.2 will encounter the following issue; v2.1.3 and later versions will not.
alter table t change column a a varchar(20);
ERROR 1105 (HY000): unsupported modify column charset utf8mb4 not match origin utf8
Solution: Explicitly specify the column charset to be consistent with the original charset.
alter table t change column a a varchar(20) character set utf8;
- According to point 1, if the column charset is not specified here, the default UTF8MB4 will be used, so the column charset must be explicitly specified to match the original charset.
- According to point 2, use the HTTP API to obtain the table metadata, then search for the column charset based on the column name and the Charset keyword.
curl "http://$IP:10080/schema/test/t" | python -m json.tool
- Here python is used only to format the JSON for readability; it can be omitted.
{
    "ShardRowIDBits": 0,
    "auto_inc_id": 0,
    "charset": "utf8",                    # table charset
    "collate": "",
    "cols": [                             # column-related information starts here
        {
            ...
            "id": 1,
            "name": {"L": "a", "O": "a"}, # column name
            "offset": 0,
            "origin_default": null,
            "state": 5,
            "type": {
                "Charset": "utf8",        # column a charset
                "Collate": "utf8_bin",
                "Decimal": 0,
                "Elems": null,
                "Flag": 0,
                "Flen": 10,
                "Tp": 15
            }
        }
    ],
    ...
}
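The lookup in point 2 can be automated by walking the JSON the schema endpoint returns. A sketch assuming the field layout shown above (a trimmed sample stands in for the live curl call; a real check would fetch `http://$IP:10080/schema/<db>/<table>` instead):

```python
import json

# Trimmed sample of the /schema/<db>/<table> response shown above.
SAMPLE = """
{
    "charset": "utf8",
    "cols": [
        {
            "name": {"L": "a", "O": "a"},
            "type": {"Charset": "utf8", "Collate": "utf8_bin", "Flen": 10}
        }
    ]
}
"""

def column_charsets(schema_json):
    """Map each column name to its charset, falling back to the
    table charset when the column does not set one."""
    schema = json.loads(schema_json)
    table_charset = schema.get("charset", "")
    return {
        col["name"]["O"]: col["type"].get("Charset") or table_charset
        for col in schema.get("cols", [])
    }

print(column_charsets(SAMPLE))  # {'a': 'utf8'}
```

Comparing this mapping against the table-level charset quickly reveals the mismatched columns that trigger the DDL errors described in this section.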
unsupported modify charset from utf8mb4 to utf8
- Before Upgrade: v2.1.1, v2.1.2
create table t(a varchar(10)) charset=utf8;
Query OK, 0 rows affected
Time: 0.109s
show create table t;
+-------+-------------------------------------------------------+
| Table | Create Table |
+-------+-------------------------------------------------------+
| t | CREATE TABLE `t` ( |
| | `a` varchar(10) DEFAULT NULL |
| | ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin |
+-------+-------------------------------------------------------+
- The above show create table output only shows the table charset, but the column charset is actually UTF8MB4, which can be confirmed by obtaining the schema through the HTTP API. This is a bug: the column charset should have been consistent with the table charset (UTF8) when the table was created. The issue was fixed in v2.1.3.
- After Upgrade: v2.1.3 and later versions
show create table t;
+-------+--------------------------------------------------------------------+
| Table | Create Table |
+-------+--------------------------------------------------------------------+
| t | CREATE TABLE `t` ( |
| | `a` varchar(10) CHARSET utf8mb4 COLLATE utf8