tidb-server is logging excessively

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: tidb-server疯狂打印日志

| username: 普罗米修斯

[TiDB Usage Environment] Production Environment
[TiDB Version] v5.2.4
[Encountered Problem: Phenomenon and Impact]
There are two tidb-servers. One (192.168.90.204) logs at a normal rate, while the other (192.168.90.203) is printing INFO logs like crazy, scrolling several pages per second.
Here are the log contents:
[2023/11/29 13:20:04.180 +08:00] [INFO] [coprocessor.go:812] [“[TIME_COP_PROCESS] resp_time:618.388136ms txnStartTS:445968601212256324 region_id:280099952 store_addr:192.168.80.211:20160 kv_process_ms:608 kv_wait_ms:12 kv_read_ms:548 processed_versions:563022 total_versions:563023 rocksdb_delete_skipped_count:0 rocksdb_key_skipped_count:563022 rocksdb_cache_hit_count:1607 rocksdb_read_count:0 rocksdb_read_byte:0”] [conn=33]
[2023/11/29 13:20:04.200 +08:00] [INFO] [coprocessor.go:812] [“[TIME_COP_PROCESS] resp_time:496.161498ms txnStartTS:445968601029017656 region_id:298743525 store_addr:192.168.80.211:20161 kv_process_ms:492 kv_wait_ms:4 kv_read_ms:440 processed_versions:519848 total_versions:519849 rocksdb_delete_skipped_count:0 rocksdb_key_skipped_count:519848 rocksdb_cache_hit_count:1490 rocksdb_read_count:0 rocksdb_read_byte:0”] [conn=27]
[2023/11/29 13:20:04.202 +08:00] [INFO] [coprocessor.go:812] [“[TIME_COP_PROCESS] resp_time:639.772497ms txnStartTS:445968601212256324 region_id:283718028 store_addr:192.168.80.211:20160 kv_process_ms:628 kv_wait_ms:12 kv_read_ms:548 processed_versions:520844 total_versions:520845 rocksdb_delete_skipped_count:0 rocksdb_key_skipped_count:520844 rocksdb_cache_hit_count:1490 rocksdb_read_count:0 rocksdb_read_byte:0”] [conn=33]
[2023/11/29 13:20:04.273 +08:00] [INFO] [coprocessor.go:812] [“[TIME_COP_PROCESS] resp_time:401.801006ms txnStartTS:445968601029017656 region_id:291068788 store_addr:192.168.80.210:20161 kv_process_ms:404 kv_wait_ms:0 kv_read_ms:324 processed_versions:521020 total_versions:521021 rocksdb_delete_skipped_count:0 rocksdb_key_skipped_count:521020 rocksdb_cache_hit_count:1490 rocksdb_read_count:0 rocksdb_read_byte:0”] [conn=27]
[2023/11/29 13:20:04.307 +08:00] [INFO] [coprocessor.go:812] [“[TIME_COP_PROCESS] resp_time:473.510407ms txnStartTS:445968601029017656 region_id:291045049 store_addr:192.168.80.212:20160 kv_process_ms:472 kv_wait_ms:0 kv_read_ms:416 processed_versions:526086 total_versions:526087 rocksdb_delete_skipped_count:0 rocksdb_key_skipped_count:526086 rocksdb_cache_hit_count:1489 rocksdb_read_count:0 rocksdb_read_byte:0”] [conn=27]
[2023/11/29 13:20:04.317 +08:00] [INFO] [coprocessor.go:812] [“[TIME_COP_PROCESS] resp_time:431.285655ms txnStartTS:445968601029017656 region_id:295989628 store_addr:192.168.80.210:20161 kv_process_ms:432 kv_wait_ms:0 kv_read_ms:388 processed_versions:524295 total_versions:524296 rocksdb_delete_skipped_count:0 rocksdb_key_skipped_count:524295 rocksdb_cache_hit_count:1491 rocksdb_read_count:0 rocksdb_read_byte:0”] [conn=27]
[2023/11/29 13:20:04.333 +08:00] [INFO] [coprocessor.go:812] [“[TIME_COP_PROCESS] resp_time:474.009503ms txnStartTS:445968601212256324 region_id:279753718 store_addr:192.168.80.211:20161 kv_process_ms:464 kv_wait_ms:8 kv_read_ms:388 processed_versions:521834 total_versions:521835 rocksdb_delete_skipped_count:0 rocksdb_key_skipped_count:521834 rocksdb_cache_hit_count:1491 rocksdb_read_count:0 rocksdb_read_byte:0”] [conn=33]
[2023/11/29 13:20:04.343 +08:00] [INFO] [coprocessor.go:812] [“[TIME_COP_PROCESS] resp_time:621.57264ms txnStartTS:445968601029017656 region_id:286151582 store_addr:192.168.80.211:20160 kv_process_ms:620 kv_wait_ms:0 kv_read_ms:560 processed_versions:523556 total_versions:523557 rocksdb_delete_skipped_count:0 rocksdb_key_skipped_count:523556 rocksdb_cache_hit_count:1490 rocksdb_read_count:0 rocksdb_read_byte:0”] [conn=27]
[2023/11/29 13:20:04.368 +08:00] [INFO] [coprocessor.go:812] [“[TIME_COP_PROCESS] resp_time:568.601094ms txnStartTS:445968601029017656 region_id:286304861 store_addr:192.168.80.212:20161 kv_process_ms:568 kv_wait_ms:0 kv_read_ms:500 processed_versions:520161 total_versions:520162 rocksdb_delete_skipped_count:0 rocksdb_key_skipped_count:520161 rocksdb_cache_hit_count:1492 rocksdb_read_count:0 rocksdb_read_byte:0”] [conn=27]
[2023/11/29 13:20:04.378 +08:00] [INFO] [coprocessor.go:812] [“[TIME_COP_PROCESS] resp_time:483.113367ms txnStartTS:445968601029017656 region_id:299189800 store_addr:192.168.80.218:20160 kv_process_ms:480 kv_wait_ms:0 kv_read_ms:424 processed_versions:524452 total_versions:524453 rocksdb_delete_skipped_count:0 rocksdb_key_skipped_count:524452 rocksdb_cache_hit_count:1489 rocksdb_read_count:0 rocksdb_read_byte:0”] [conn=27]
[2023/11/29 13:20:04.388 +08:00] [INFO] [coprocessor.go:812] [“[TIME_COP_PROCESS] resp_time:451.893736ms txnStartTS:445968601029017656 region_id:299196928 store_addr:192.168.80.211:20161 kv_process_ms:452 kv_wait_ms:0 kv_read_ms:392 processed_versions:524163 total_versions:524164 rocksdb_delete_skipped_count:0 rocksdb_key_skipped_count:524163 rocksdb_cache_hit_count:1491 rocksdb_read_count:0 rocksdb_read_byte:0”] [conn=27]
[2023/11/29 13:20:04.417 +08:00] [INFO] [coprocessor.go:812] [“[TIME_COP_PROCESS] resp_time:504.881838ms txnStartTS:445968601212256324 region_id:291476718 store_addr:192.168.80.210:20160 kv_process_ms:504 kv_wait_ms:0 kv_read_ms:444 processed_versions:522649 total_versions:522650 rocksdb_delete_skipped_count:0 rocksdb_key_skipped_count:522649 rocksdb_cache_hit_count:1491 rocksdb_read_count:0 rocksdb_read_byte:0”] [conn=33]
[2023/11/29 13:20:04.436 +08:00] [INFO] [coprocessor.go:812] [“[TIME_COP_PROCESS] resp_time:489.422661ms txnStartTS:445968601212256324 region_id:285104395 store_addr:192.168.80.211:20161 kv_process_ms:492 kv_wait_ms:0 kv_read_ms:444 processed_versions:522002 total_versions:522003 rocksdb_delete_skipped_count:0 rocksdb_key_skipped_count:522002 rocksdb_cache_hit_count:1489 rocksdb_read_count:0 rocksdb_read_byte:0”] [conn=33]
[2023/11/29 13:20:04.450 +08:00] [INFO] [coprocessor.go:812] [“[TIME_COP_PROCESS] resp_time:474.82598ms txnStartTS:445968601029017656 region_id:300282259 store_addr:192.168.80.212:20160 kv_process_ms:476 kv_wait_ms:0 kv_read_ms:436 processed_versions:523547 total_versions:523548 rocksdb_delete_skipped_count:0 rocksdb_key_skipped_count:523547 rocksdb_cache_hit_count:1489 rocksdb_read_count:0 rocksdb_read_byte:0”] [conn=27]
[2023/11/29 13:20:04.453 +08:00] [INFO] [coprocessor.go:812] [“[TIME_COP_PROCESS] resp_time:510.59763ms txnStartTS:445968601029017656 region_id:299549583 store_addr:192.168.80.212:20161 kv_process_ms:512 kv_wait_ms:0 kv_read_ms:448 processed_versions:523515 total_versions:523516 rocksdb_delete_skipped_count:0 rocksdb_key_skipped_count:523515 rocksdb_cache_hit_count:1492 rocksdb_read_count:0 rocksdb_read_byte:0”] [conn=27]


[Resource Monitoring Status]


The server 192.168.90.203 had high memory and CPU usage in the morning and could not be connected to. After restarting this tidb-server from the control machine, CPU and memory usage dropped.
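For reference, a quick way to see which connections are producing the flood on 192.168.90.203 (a rough sketch; the log path below assumes a default tiup deployment directory, adjust to your own):

```bash
# Count TIME_COP_PROCESS lines per connection ID in the tidb log.
# /tidb-deploy/tidb-4000/log/tidb.log is an assumed default path.
grep 'TIME_COP_PROCESS' /tidb-deploy/tidb-4000/log/tidb.log \
  | grep -o 'conn=[0-9]*' \
  | sort | uniq -c | sort -rn | head
```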

| username: 随缘天空 | Original post link

The startup times of your two TiDB servers are different. Did you change any configuration on 203 that caused it to be inconsistent with 204?

| username: 普罗米修斯 | Original post link

Resource usage was high in the morning, so I restarted it; I did not change any configuration.

| username: chenhanneu | Original post link

When I was testing BR restore, there was a period during which BR printed logs almost identical to the TiDB logs above. I am not sure what operation is happening during that phase.

| username: 普罗米修斯 | Original post link

Yesterday, two TiKV nodes were taken offline. It seems that the replicas have been replenished. I am not sure if there are any other operations needed now.

| username: TiDBer_小阿飞 | Original post link

If it really doesn’t work, change the log level to prevent the logs from overwhelming the system. After identifying the cause, switch it back.
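If a restart or reload is not convenient, TiDB's status-port HTTP API can lower the level on just the noisy instance. A sketch assuming the default status port 10080; the change is in-memory only and reverts on restart:

```bash
# Lower the log verbosity on the noisy instance only (not persisted).
curl -X POST -d "log_level=warn" http://192.168.90.203:10080/log-level

# Switch it back after the cause is found.
curl -X POST -d "log_level=info" http://192.168.90.203:10080/log-level
```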

| username: 小龙虾爱大龙虾 | Original post link

There is a slow SQL statement running a fairly heavy coprocessor (cop) task, which is what produces so many of these logs.
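For what it's worth, the [conn=27] / [conn=33] fields in those log lines are connection IDs, so the statements behind them can be looked up while they are still running. A sketch; host, port 4000, and credentials are assumptions:

```bash
# Find the statements behind the connection IDs seen in the log.
mysql -h 192.168.90.203 -P 4000 -u root -p -e "
  SELECT INSTANCE, ID, USER, DB, TIME, LEFT(INFO, 120) AS stmt
  FROM INFORMATION_SCHEMA.CLUSTER_PROCESSLIST
  WHERE ID IN (27, 33);"
```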

| username: 普罗米修斯 | Original post link

In the dashboard's slow query section, I can see the SQL queries that keep being executed.
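The same information can also be pulled with SQL instead of the dashboard; the txnStartTS values in the log above should match Txn_start_ts in the slow query table. A sketch assuming the default port 4000 and the v5.x slow-query column names; credentials are placeholders:

```bash
mysql -h 192.168.90.203 -P 4000 -u root -p -e "
  SELECT Time, Conn_ID, Query_time, Process_keys, Total_keys, LEFT(Query, 120) AS q
  FROM INFORMATION_SCHEMA.CLUSTER_SLOW_QUERY
  WHERE Txn_start_ts IN (445968601212256324, 445968601029017656)
  ORDER BY Time DESC;"
```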


| username: 小龙虾爱大龙虾 | Original post link

It's a single-table query, but the result set might still be quite large :thinking: You know what to do, right? :joy: Contact the business development team.

| username: 普罗米修斯 | Original post link

How do I change the TiDB log level? I want to set it to warn, but after editing and reloading it reports that this is not a valid configuration item. I couldn't find any relevant information after searching around.

| username: 普罗米修斯 | Original post link

I adjusted the log level first and then asked the business team to modify this SQL.

| username: h5n1 | Original post link

log.level

| username: TiDBer_小阿飞 | Original post link

-L
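For context, -L is the log-level flag of the tidb-server binary itself, so it only applies when the process is started by hand. A sketch with a placeholder PD address and log path; tiup-managed clusters should set log.level in the topology instead:

```bash
# Only relevant when tidb-server is launched manually; PD address and log path
# below are placeholders.
bin/tidb-server -L warn --store tikv --path "pd-host:2379" --log-file /tmp/tidb.log
```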

| username: DBRE | Original post link

Execute show analyze status; on the abnormal TiDB node to check whether an analyze job is running. These logs can appear while an analyze is in progress.
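Something like this, run against the abnormal instance (a sketch; port 4000 and credentials are assumptions):

```bash
# Check whether an ANALYZE job is running on the problem instance.
mysql -h 192.168.90.203 -P 4000 -u root -p -e "SHOW ANALYZE STATUS;"
```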

| username: 普罗米修斯 | Original post link

It worked. After changing it to log.level, the reload succeeded.

| username: 普罗米修斯 | Original post link

There is no analyze job running.

| username: TI表弟 | Original post link

You can modify the log.level parameter through tiup edit-config.
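A sketch of that flow, assuming the cluster is named tidb-cluster (replace with your own cluster name):

```bash
# In the editor that opens, add under server_configs -> tidb:
#   log.level: warn
tiup cluster edit-config tidb-cluster

# Reload only the affected tidb-server so the rest of the cluster is untouched.
tiup cluster reload tidb-cluster -N 192.168.90.203:4000
```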

| username: TiDBer_gxUpi9Ct | Original post link

You can modify the parameters.

| username: dba远航 | Original post link

There are large transactions, and combined with the low log level, that is what is causing this.

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.