Memory Pessimistic Lock Not Lost in Version 6.1.0

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 6.1.0 内存悲观锁不丢失

| username: h5n1

[Version] 6.1.0 arm
[Steps]
create table t1 (id int primary key, name varchar(32));
insert into t1 values (1, 'xiaohong');

create table t2 (id int primary key, name varchar(32));
insert into t2 values (100, 'hha');

Find the store where the region leader of table t1’s data is located, assuming tikv1.
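To locate that store, TiDB's `SHOW TABLE ... REGIONS` statement and the `INFORMATION_SCHEMA.TIKV_STORE_STATUS` table can be used (the column names below follow TiDB's documented output; verify against your version):

```sql
-- LEADER_STORE_ID shows which TiKV store holds the Raft leader
-- for each region of table t1.
SHOW TABLE t1 REGIONS;

-- Map the store ID to a TiKV address.
SELECT store_id, address FROM INFORMATION_SCHEMA.TIKV_STORE_STATUS;
```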

tx1: begin;
tx1: update t1 set name = 'xiaoming' where name = 'xiaohong'; -- tx1 now holds the in-memory pessimistic lock.

tx2: session 2 executes update t1 set name = 'xiaohua' where name = 'xiaohong'; it blocks.

Kill tikv1, so that tx1's lock is lost.
After waiting a while, session 2's statement executes successfully.
tx3: session 3 executes update t2 set name = 'xxxx' where name = 'hha'; it commits successfully.

tx1: execute update t2 set name = 'yyy'; this refreshes tx1's forUpdateTS so that it becomes greater than tx2's commitTS.

tx1 executes commit; check whether the commit succeeds.

[Result]

  1. Session 1 updates t1 and finds the leader.

  2. Session 2 updates the same row, blocked.

  3. Kill tikv on the leader.

After killing the leader tikv, the in-memory lock is not lost, and session 2 remains blocked.

Parameter configuration:

System view:

  4. After session 1 commits, session 2 executes successfully.
    The leader is now store 2.

[Questions]

  1. The in-memory pessimistic lock was not released or lost after the leader TiKV node was killed (it appears to have been synchronized over, with the leader switching to store 2), and the blocked session stayed blocked. This does not meet expectations (and I am sure I did not kill the wrong leader).
  2. Is there a more intuitive way to see the current lock holder and related information?
| username: Billmay表妹 | Original post link

This has also been brought up at the moderator exchange meeting~

| username: h5n1 | Original post link

It’s best to have a senior developer try it first.

| username: Billmay表妹 | Original post link

Reading it~ hhhh

| username: sticnarf | Original post link

  1. Acquiring a pessimistic lock is indeed a purely in-memory operation. However, if the transaction does not commit within a short period, a background task repeatedly issues TxnHeartBeat requests to update the TTL of the existing lock, preventing it from being cleared.

TxnHeartBeat does not use the in-memory optimization when updating the TTL; it still goes through Raft and is persisted. So when you found that the lock had been "synchronized", it was most likely because TxnHeartBeat had already written the pessimistic lock to disk.
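The behavior described above can be sketched as a toy model (Python; `ToyLock` and `txn_heartbeat` are illustrative names, not TiKV's actual API): acquiring the lock touches only memory, while each TxnHeartBeat both extends the TTL and persists the lock record, so a lock that lives long enough for one heartbeat survives a leader crash.

```python
HEARTBEAT_INTERVAL_MS = 10_000  # TxnHeartBeat fires roughly every 10 s

class ToyLock:
    """Illustrative model of a pessimistic lock record (not TiKV's real types)."""
    def __init__(self, start_ts: int, ttl_ms: int):
        self.start_ts = start_ts
        self.ttl_ms = ttl_ms
        self.persisted = False  # acquired in memory only; lost if the leader dies

def txn_heartbeat(lock: ToyLock) -> int:
    """Extend the lock's TTL. In TiKV this request skips the in-memory
    optimization and goes through Raft, so the lock record gets persisted."""
    lock.ttl_ms = max(lock.ttl_ms, 2 * HEARTBEAT_INTERVAL_MS)
    lock.persisted = True  # side effect: the lock now survives a leader crash
    return lock.ttl_ms

lock = ToyLock(start_ts=1, ttl_ms=3_000)
assert not lock.persisted   # freshly acquired: memory-only
txn_heartbeat(lock)         # transaction lived past one heartbeat interval
assert lock.persisted and lock.ttl_ms >= 3_000
```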

  2. Is there a more intuitive way to see the current lock holder and related information?

If a lock is an in-memory lock, it can only exist on the leader. However, there is no good way to see whether a particular lock has been persisted.
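As a partial answer to the question, TiDB (v5.1 and later) does expose lock-wait information through system tables, though as noted they do not show whether a lock has been persisted:

```sql
-- Pending pessimistic lock waits: which transaction is blocked on which key.
SELECT * FROM INFORMATION_SCHEMA.DATA_LOCK_WAITS;

-- Active transactions cluster-wide, including the lock-holding one.
SELECT * FROM INFORMATION_SCHEMA.CLUSTER_TIDB_TRX;
```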

| username: h5n1 | Original post link

Thanks, expert. How often is the TTL updated? In other words, is the lock information persisted each time the TTL is updated? And will this behavior be changed later?

| username: sticnarf | Original post link

Yes, the TTL update persists the lock information, and the TTL is refreshed every 10 seconds. This approach probably won't change: for a longer-lived transaction that needs TTL updates, persisting the lock reduces the probability of losing it, which is the better trade-off. Such transactions should be rare, so in most cases there is no efficiency sacrifice.

| username: xfworld | Original post link

So complicated… :custard:

| username: h5n1 | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.