After deleting around 900 indexes, the IO was fully utilized

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 删除900来个索引后 io直接打满了

| username: zzw6776

[TiDB Usage Environment] Production Environment
[TiDB Version] v6.5.1

First of all, it can be confirmed that the issue is not caused by queries losing these indexes, because the indexes had been made invisible a month ago and indeed nothing was using them.

I deleted about 900 indexes at once, and then IO was immediately maxed out. I'm not sure whether it's a GC issue.
Currently I'm trying to throttle it with:

tiup ctl:v6.5.1 tikv --host xxxxx:xxx modify-tikv-config -n gc.max-write-bytes-per-sec -v 1MB

It doesn't seem to have any effect.

The IO has now come down, but I'm not sure whether the command took effect or the GC simply stopped. Do you need any more information?

Currently there are still over 700 entries in gc_delete_range.

gc_delete_range_done shows 155 completed.
Slow SQL monitoring is as follows

| username: 像风一样的男子 | Original post link

Run `ADMIN SHOW DDL JOBS` to check the progress of DDL operations. Also check the Dashboard for slow queries and other related information.
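As a concrete statement, the check above looks like this (the optional number limits how many recent jobs are listed):

```sql
-- Lists recent DDL jobs with their state (queueing, running, synced, ...)
ADMIN SHOW DDL JOBS 20;
```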

| username: realcp1018 | Original post link

I previously bookmarked a writeup on large-scale deletion practice, but the link is now dead. However, you can still check the effective values of the related settings through information_schema.cluster_config. Here is a screenshot of what I had bookmarked, for your reference:
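For example, the effective GC write throttle on every TiKV instance can be read like this (a sketch; the key name matches the parameter discussed later in the thread):

```sql
SELECT instance, `key`, value
FROM information_schema.cluster_config
WHERE type = 'tikv'
  AND `key` = 'gc.max-write-bytes-per-sec';
```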

| username: zzw6776 | Original post link

Dropping an index completes immediately; the actual data cleanup waits for GC, so the DDL job doesn't show much.

| username: zzw6776 | Original post link

`gc.max-write-bytes-per-sec`
I can't find the default value for this parameter…
Could the default be 0, meaning no limit?
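One way to confirm the running value on each TiKV node is `SHOW CONFIG` (a sketch; a value of 0 here means unlimited, but verify the exact rendering on your own cluster):

```sql
-- Reads the live configuration value from each TiKV instance
SHOW CONFIG WHERE type = 'tikv' AND name = 'gc.max-write-bytes-per-sec';
```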

| username: 像风一样的男子 | Original post link

By default, it writes to disk as fast as it can.

| username: realcp1018 | Original post link

Yes, as shown above, the default is no limit.