How well does TiDB deal with 300k tables?

Application environment:

PoC on a single server

TiDB version:

7.0

Reproduction method:

N/A

Problem:

We’re in the early stages of researching and testing TiDB and are very happy with what we see so far.

We’re currently on MariaDB 10.5 with around 300k tables, which MariaDB struggles with. Even simple queries often sit in the “Opening tables” state for 5–20 seconds when the table hasn’t been accessed in a while. Our application writes about 40 GB of data per hour and is also read-heavy.
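
For reference, this is how we confirm the stalls are table-cache misses rather than I/O; these are standard MariaDB/MySQL status counters and variables, nothing specific to our schema:

```sql
-- If Opened_tables climbs steadily while queries stall in "Opening tables",
-- the table cache is thrashing under the 300k-table catalog.
SHOW GLOBAL STATUS LIKE 'Opened_tables';            -- cumulative cache misses
SHOW GLOBAL STATUS LIKE 'Open_tables';              -- tables currently held open
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';      -- per-server open-table limit
SHOW GLOBAL VARIABLES LIKE 'table_definition_cache';
```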

How will TiDB cope with 300k tables?

Resource allocation:

Our production setup would be dedicated servers with at least the specs listed on the “Hardware and software requirements” page, plus enterprise-grade NVMe SSDs.

With 300k tables, the schema metadata that each TiDB server loads (and reloads after every DDL operation) grows correspondingly large, which could gradually slow down future DDL operations. It shouldn’t have a significant impact on DML operations, though.
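
If you want to keep an eye on that during testing, TiDB’s built-in DDL job history shows how long each schema change takes; a minimal sketch (ADMIN SHOW DDL JOBS is a standard TiDB statement, and sandbox_t is just a hypothetical table):

```sql
-- Create a throwaway table, then inspect the most recent DDL jobs.
CREATE TABLE sandbox_t (id BIGINT PRIMARY KEY, payload VARCHAR(255));
ADMIN SHOW DDL JOBS 5;  -- state plus start/end times for the last 5 jobs
```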

TiDB should be able to handle 300k tables, but performance will also depend on the data size and the number of Regions. We recommend testing TiDB with your specific workload and data size to verify its performance.
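
Both of those numbers are easy to check from SQL during a PoC; both views below are standard TiDB information_schema tables:

```sql
SELECT COUNT(*) FROM information_schema.TABLES;              -- total tables in the cluster
SELECT COUNT(*) FROM information_schema.TIKV_REGION_STATUS;  -- total Regions backing them
```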

TiKV doesn’t use the file-per-table layout that InnoDB typically uses, which makes it far more scalable when there are many tables. And at 40 GB of writes per hour, a distributed database is likely to work a lot better for you anyway.
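
A minimal illustration of the difference, assuming a fresh table named t1: in TiKV, a table’s data is just a contiguous range in one shared, ordered keyspace, which you can see via SHOW TABLE ... REGIONS:

```sql
CREATE TABLE t1 (id BIGINT PRIMARY KEY, v VARCHAR(64));
-- START_KEY/END_KEY come back as encoded t_<table_id>_ prefixes: each table is
-- a key range inside shared storage, not a separate file on disk, so the table
-- count doesn't multiply file handles the way InnoDB's .ibd files can.
SHOW TABLE t1 REGIONS;
```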

Please feel free to reach out to our sales organization to discuss a shared PoC, where we can work together to make sure you follow best practices.

Thank you for the prompt replies. That answers my question. I’m working to get approval for a full PoC and will reach out at that time.