Note:
This topic has been translated from a Chinese forum by GPT and might contain errors. Original topic: Writing from one TiDB table to another with pytispark has very poor performance.

As shown in the screenshot, using PySpark to run a simple data copy such as "insert into table t1 as select * from s1" takes about two hours to import 40 million rows. Are there any parameters that can be tuned to speed this up?
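Not an official answer, just a sketch of one alternative that often performs better than a single INSERT ... SELECT statement: read the source table through TiSpark, repartition the DataFrame so the write runs in many parallel tasks, and write back with TiSpark's batch-write datasource. The PD/TiDB addresses, credentials, database name, and partition counts below are placeholders, and the `format("tidb")` option names follow TiSpark's batch-write documentation, so please verify them against your TiSpark version.

```python
from pyspark.sql import SparkSession

# Placeholder cluster addresses and credentials -- replace with your own.
spark = (
    SparkSession.builder
    .appName("tidb-table-copy")
    # Enable TiSpark so TiDB tables can be read directly from TiKV.
    .config("spark.sql.extensions", "org.apache.spark.sql.TiExtensions")
    .config("spark.tispark.pd.addresses", "pd0:2379")
    # More shuffle partitions -> more concurrent tasks during the copy.
    .config("spark.sql.shuffle.partitions", "200")
    .getOrCreate()
)

# Read the source table; TiSpark pushes the scan down to TiKV.
src = spark.sql("SELECT * FROM test.s1")

# Spread the write across many tasks instead of funnelling all 40M rows
# through one INSERT ... SELECT executed on a single connection.
src = src.repartition(64)

# Batch write into the target table via TiSpark's datasource
# (option names per the TiSpark batch-write docs; check your version).
(
    src.write.format("tidb")
    .option("tidb.addr", "tidb-server")   # placeholder TiDB host
    .option("tidb.port", "4000")
    .option("tidb.user", "root")
    .option("tidb.password", "")
    .option("database", "test")
    .option("table", "t1")
    .mode("append")
    .save()
)
```

Executor resources (`spark.executor.instances`, `spark.executor.cores`, `spark.executor.memory`) and the repartition count also matter: with too few executors or partitions, the write is effectively serial no matter how the statement is expressed.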