Inconsistent Query Results with the Same Query Conditions

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 相同查询条件查询结果不一致

| username: TiDBer_m1X2pYSj

【TiDB Usage Environment】Production Environment
【TiDB Version】6.5.0
【Reproduction Path】Result Query
【Encountered Problem: Problem Phenomenon and Impact】Inconsistent result sets for the same query conditions:
Download reconciliation statement, query parameters: {"beginPaidTime":"2023-08-23 00:00:00","endPaidTime":"2023-08-23 23:59:59"}
Download reconciliation statement, number of results returned by the database: 102

Download reconciliation statement, query parameters: {"beginPaidTime":"2023-08-23 00:00:00","endPaidTime":"2023-08-23 23:59:59"}
Download reconciliation statement, number of results returned by the database: 122

When we query yesterday's historical data at 8 AM today, the row count returned by the first query appears random.

【Resource Configuration】Enter TiDB Dashboard - Cluster Info - Hosts and take a screenshot of this page
【Attachment: Screenshot/Log/Monitoring】

| username: zhanggame1 | Original post link

Can you send the SQL? Are you sure the SQL executed each time is the same?

| username: Kongdom | Original post link

It depends on whether the business allows inserting historical data. In our case, the business allows backfilling orders, so yesterday's counts can grow over time. :yum:
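
If backfilled rows can land with yesterday's paid time, a time-range query will legitimately return more rows on a later run. A minimal sketch of the effect, assuming a hypothetical `orders` table with a `paid_time` column (the original poster never shared the actual SQL):

```sql
-- Hypothetical reconciliation query; the table and column names are
-- assumptions, since the actual SQL was never posted in this thread.
SELECT COUNT(*)
FROM orders
WHERE paid_time >= '2023-08-23 00:00:00'
  AND paid_time <= '2023-08-23 23:59:59';

-- If the business backfills an order after the first run, a later run
-- of the same query correctly returns a larger count:
INSERT INTO orders (order_id, paid_time)
VALUES (10001, '2023-08-23 15:30:00');  -- hypothetical backfilled row
```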

| username: tidb菜鸟一只 | Original post link

It’s hard to see where the problem is without the SQL.

| username: TiDBer_m1X2pYSj | Original post link

Thank you everyone above. Yesterday, I investigated the issue and found that it might be because two TiKV instances were down, leaving only one instance running. Now that I’ve started the other two instances, the data seems fine today. I’ll continue to monitor the situation.
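
For anyone checking a similar situation: one quick way to confirm whether any TiKV store is down is to query TiDB's `INFORMATION_SCHEMA.TIKV_STORE_STATUS` table. A minimal sketch, runnable from any TiDB server in the cluster:

```sql
-- A healthy store reports STORE_STATE_NAME = 'Up'; stores shown as
-- 'Down' or 'Disconnected' are not serving their regions.
SELECT STORE_ID, ADDRESS, STORE_STATE_NAME, LEADER_COUNT, REGION_COUNT
FROM INFORMATION_SCHEMA.TIKV_STORE_STATUS;
```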

| username: wakaka | Original post link

If 2 of 3 TiKVs are down and only 1 is still running, can the cluster still function without errors?

| username: zhanggame1 | Original post link

Impossible. With the default three replicas, Raft needs a majority (2 of 3) to elect a leader and serve requests, so if 2 out of 3 TiKV nodes fail, the cluster will no longer provide service.

| username: Kongdom | Original post link

:thinking: I suggest checking the replica distribution. Could it be that some nodes don’t have leader replicas?
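
A minimal sketch of that check using TiDB's information schema: a region whose peers report no leader cannot serve reads or writes, so a non-empty result here would line up with rows going missing from query results:

```sql
-- Find regions where no peer is currently the Raft leader.
SELECT REGION_ID
FROM INFORMATION_SCHEMA.TIKV_REGION_PEERS
GROUP BY REGION_ID
HAVING SUM(IS_LEADER) = 0;
```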

| username: TiDBer_m1X2pYSj | Original post link

An operational error in the k8s deployment changed the number of TiKV instances to 1.

| username: Kongdom | Original post link

:joy: Uh… This is beyond my knowledge. Is k8s really that popular?
Is changing the number of instances to 1 the same as scaling in to one node?

| username: cy6301567 | Original post link

We have three TiKVs, and when memory is about to blow up, we restart them together.

| username: zhanggame1 | Original post link

What are the benefits of running a database on a Kubernetes cluster?

| username: 大飞哥online | Original post link

It's more convenient to delete the database and run away, haha.

| username: Kongdom | Original post link

Ah, this… I never expected this :joy:

| username: redgame | Original post link

Learned something new, impressive.