What is the difference between TiKV's Block Cache and TiDB's MemBuffer when both cache business data during RocksDB queries?

This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: rocksdb查询时TiKV的Block Cache和TiDB 的Membuffer的都缓存业务数据,有什么区别?

| username: alfred


| username: forever | Original post link

The Block Cache is mainly used for reads, similar to the buffer cache in a conventional database but read-only. The MemBuffer is primarily used for writes in the LSM-tree storage structure: data from inserts, deletes, and updates is written into memory first; once full, it is flushed to a file, and it can also serve reads during queries.
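The write path described above can be sketched as a toy LSM tree in Python. This is an illustrative model only, not RocksDB's or TiKV's actual implementation: writes land in an in-memory buffer, a full buffer is flushed out as an immutable "file", and reads check memory before the flushed files.

```python
class TinyLSM:
    """Toy LSM tree: illustrative only, not RocksDB's real design."""

    def __init__(self, memtable_limit=2):
        self.memtable = {}              # in-memory write buffer
        self.ssts = []                  # flushed, immutable "files" (newest last)
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value      # inserts/updates/deletes go to memory first
        if len(self.memtable) >= self.memtable_limit:
            self.flush()

    def flush(self):
        # Once the buffer is full, it is written out as an immutable file.
        self.ssts.append(dict(self.memtable))
        self.memtable = {}

    def get(self, key):
        # Reads consult the memtable first, then files from newest to oldest.
        if key in self.memtable:
            return self.memtable[key]
        for sst in reversed(self.ssts):
            if key in sst:
                return sst[key]
        return None
```

For example, after `put("a", 1)` and `put("b", 2)` the buffer flushes, yet `get("a")` still returns 1 by falling through to the flushed file.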

| username: alfred | Original post link

The course seems to say that the MemBuffer on the TiDB server caches some query results, as well as statistics and the like, while the Block Cache is the cache on the TiKV server, which also holds recently read data.
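Conceptually, the Block Cache is an LRU cache keyed by something like (SST file, block offset). A hypothetical sketch of that idea in Python (the real cache inside RocksDB is a sharded LRU implemented in C++):

```python
from collections import OrderedDict

class BlockCache:
    """Toy LRU cache over SST data blocks; illustrative only."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()      # (sst_file, block_offset) -> block bytes

    def get(self, key):
        if key not in self.cache:
            return None                 # cache miss: caller must read the SST file
        self.cache.move_to_end(key)     # mark as most recently used
        return self.cache[key]

    def insert(self, key, block):
        self.cache[key] = block
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used block
```

Recently read blocks stay hot; blocks that have not been touched are evicted first when the cache fills.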

| username: h5n1 | Original post link

The Block Cache caches data blocks read from SST files, while the write buffer (memtable) is where RocksDB buffers writes; writes in RocksDB are append-only. TiDB's MemBuffer buffers writes on the TiDB side: all application writes go there first and are only sent to TiKV, i.e., into RocksDB's write buffer, at commit time.
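The MemBuffer behavior described above can be sketched as a per-transaction staging area: reads inside the transaction see their own buffered writes first, and nothing reaches the store until commit. A hypothetical Python sketch (TiDB's real MemBuffer is a Go data structure inside the transaction object, and commit goes through two-phase commit to TiKV):

```python
class Txn:
    """Toy transaction: writes stage in a MemBuffer until commit."""

    def __init__(self, store):
        self.store = store      # shared key-value dict standing in for TiKV
        self.membuffer = {}     # per-transaction write buffer

    def set(self, key, value):
        self.membuffer[key] = value   # buffered; not yet visible to other clients

    def get(self, key):
        # Read-your-own-writes: check the MemBuffer before the store.
        if key in self.membuffer:
            return self.membuffer[key]
        return self.store.get(key)

    def commit(self):
        # Only at commit are buffered writes sent to the store
        # (in TiDB, into TiKV's RocksDB write buffer).
        self.store.update(self.membuffer)
        self.membuffer = {}
```

Before commit the shared store is untouched, yet the writing transaction already sees its own changes.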

| username: alfred | Original post link

Is the Block Cache also a component of RocksDB? Does that mean a TiKV node essentially consists of RocksDB (two instances)? What does the architecture of RocksDB look like?

| username: h5n1 | Original post link

Yes, TiKV uses two RocksDB instances: one stores Raft logs, and the other stores lock information and the actual data. For the internal structure of RocksDB, you can refer to the official wiki: Home · facebook/rocksdb Wiki · GitHub
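As an aside, the sizes of these caches are tunable in TiKV's configuration file. A sketch of the relevant sections (key names as documented for recent TiKV versions; the values here are illustrative only, and older versions configured the block cache per column family instead):

```toml
# Shared block cache across the RocksDB column families.
[storage.block-cache]
capacity = "4GB"

# Memtable (write buffer) size for the default CF of the kv RocksDB instance.
[rocksdb.defaultcf]
write-buffer-size = "128MB"
```

Larger values trade memory for fewer disk reads (block cache) and fewer flushes (write buffer).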

| username: alfred | Original post link

Thank you, I’ll take a look.