Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.
Original topic: 关于在线DDL的一个问题 (A question about online DDL)
[Test/PoC Environment] TiDB
[TiDB Version] 7.5.1
Changing a table column from varchar(100) to varchar(200) is very fast, at the millisecond level. However, changing from varchar(200) to varchar(100) is very slow. When checking with admin show ddl jobs, the operation status is “write reorganization”. What is the reason for this?
Changing varchar(100) to varchar(200) should only require modifying the metadata.
Changing varchar(200) to varchar(100) requires backfilling data, which means rewriting everything.
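To see the difference concretely, here is a minimal sketch (the table and column names are made up for illustration):

```sql
-- Illustrative table; any varchar column behaves the same way.
CREATE TABLE t (id BIGINT PRIMARY KEY, c VARCHAR(100));

-- Widening: a metadata-only change, returns almost immediately.
ALTER TABLE t MODIFY COLUMN c VARCHAR(200);

-- Narrowing: a lossy type change, so TiDB rewrites (reorgs) the column data.
ALTER TABLE t MODIFY COLUMN c VARCHAR(100);

-- While the narrowing statement runs, its job shows up here
-- in the "write reorganization" state.
ADMIN SHOW DDL JOBS;
```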
Column types usually only ever need to grow, not shrink. Why do you need to reduce the length?
Any shrink involves clearing out the original data and writing it back.
It has to take the existing data into account.
Why would the column get smaller? What kind of operation needs that? And if it does get smaller, doesn't the data have to be rewritten to guarantee it is still correct?
It feels like simple internal processing logic.
Expanding a field only requires modifying some metadata, but shortening it raises the question of whether the existing values are still valid at the new length.
My guess is that TiDB does not internally track the current maximum length of a column's data. If it recorded the maximum length across all existing rows, it could presumably just modify the metadata whenever that maximum already fits, right?
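Whether or not TiDB tracks that internally, you can check it yourself before shrinking. A minimal sketch, assuming the table t and column c from the example above:

```sql
-- If this returns 100 or less, no existing row violates VARCHAR(100).
-- TiDB still reorganizes regardless; per the discussion above,
-- it does not keep this maximum as a statistic.
SELECT MAX(CHAR_LENGTH(c)) FROM t;
```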
It's the difference between an operation that doesn't touch the data and one that has to process every row.
Will it automatically truncate when it gets smaller?
What if a value exceeds 100 characters? Does it abort or continue?
It will report an error and exit.
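For example (illustrative values; the exact error message depends on the version and SQL mode):

```sql
-- A 150-character value fits in VARCHAR(200)...
INSERT INTO t VALUES (1, REPEAT('x', 150));

-- ...so narrowing back to VARCHAR(100) fails during the reorg
-- with a data-truncation error instead of silently truncating.
ALTER TABLE t MODIFY COLUMN c VARCHAR(100);
```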
I think one is logical DDL, and the other is physical DDL. Physical DDL will perform reorg operations, so it will be relatively slow.
Principles and Best Practices of DDL Statements | PingCAP Documentation Center