How to Run SQL in TiDB Without Errors?

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 如何让tidb运行sql不报错?

| username: tianjun

[TiDB Usage Environment] Production Environment
[TiDB Version] 7.1.0
[Reproduction Path] Operations performed that led to the issue
Transferring 1.9 million records (about 150 MB) from Oracle to TiDB using Navicat. After running for a while, it reports insufficient memory. I modified the parameters and raised the memory limit to 100 GB, but it still reports an error. The Windows server has 128 GB of memory, and I set tidb_mem_quota_query = 100. How can I prevent TiDB from reporting errors? MySQL is also installed on this machine, and transferring data to MySQL and Oracle works fine, but importing into TiDB fails. My biggest impression of TiDB is that it reports errors easily. Can TiDB be configured to be as stable as Oracle, without frequent errors?
The specific error is:
Your query has been cancelled due to exceeding the allowed memory limit for a single SQL query. Please try narrowing your query scope or increase the tidb_mem_quota_query limit.

[Encountered Issue: Issue Phenomenon and Impact]
[Resource Configuration] Go to TiDB Dashboard - Cluster Info - Hosts and take a screenshot of this page
[Attachments: Screenshots/Logs/Monitoring]

| username: 芮芮是产品 | Original post link

There is too little information, and TiDB cannot run on Windows. Please post your architecture diagram.

| username: Jellybean | Original post link

Your query has exceeded the maximum available memory limit for a single SQL. To ensure system stability and avoid OOM, TiDB imposes a limit on the maximum available memory for a single SQL.

The error message is clear: either add conditions to reduce the amount of data queried, or increase this limit value, which can be directly adjusted using SQL.
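
For example, a minimal sketch of raising the quota only for the current session (the value is in bytes, so 8 << 30 is 8 GiB; the exact size is an assumption, tune it to your workload):

-- Raise the per-query memory quota for this session only
SET SESSION tidb_mem_quota_query = 8 << 30;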

| username: zhanggame1 | Original post link

SET GLOBAL tidb_mem_quota_query = 10 << 30;
Change it globally.
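
The new value (in bytes) takes effect for new sessions and can be verified with, for example:

SHOW GLOBAL VARIABLES LIKE 'tidb_mem_quota_query';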

| username: zhanggame1 | Original post link

Don’t check the “Use transactions” option; with large data volumes it will cause OOM.

| username: 像风一样的男子 | Original post link

TiDB by default limits the maximum memory usage of a single SQL query to 1 GiB (the default value of tidb_mem_quota_query).
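
For reference, the current quota (in bytes; 1073741824 = 1 GiB) can be read with:

SELECT @@tidb_mem_quota_query;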

| username: ti-tiger | Original post link

OOM behavior has improved greatly since TiDB 6.5, but a sudden burst of badly performing SQL can still affect the business. In that case you can combine SQL bindings and resource groups to temporarily limit the resource consumption of a specific SQL statement.

Take a simple Cartesian join SQL as an example:

  1. Execute SQL directly

MySQL [cctest1]> select * from sbtest1 join sbtest2;
ERROR 1105 (HY000): probeWorker[2] meets error: Your query has been cancelled due to exceeding the allowed memory limit for a single SQL query. Please try narrowing your query scope or increase the tidb_mem_quota_query limit and try again.[conn=7537998832357871515]

After waiting for a while, the SQL exceeds the tidb_mem_quota_query limit and is terminated.

  2. Create a resource group with a very small RU

CREATE RESOURCE GROUP IF NOT EXISTS cc3 RU_PER_SEC = 1;

  3. Create a SQL binding

CREATE GLOBAL BINDING FOR SELECT * FROM sbtest1,sbtest2 USING SELECT /*+ RESOURCE_GROUP(cc3) */ * FROM sbtest1,sbtest2;

  4. Execute the SQL in a new session

MySQL [cctest1]> select * from sbtest1,sbtest2;
ERROR 8252 (HY000): Exceeded resource group quota limitation

The SQL is terminated due to the resource group limitation, achieving the purpose of temporarily limiting the resource consumption of a specific SQL.
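
Once the emergency is over, the throttle can be removed; a cleanup sketch using the names from the steps above:

DROP GLOBAL BINDING FOR SELECT * FROM sbtest1,sbtest2;
DROP RESOURCE GROUP cc3;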

| username: dba远航 | Original post link

When migrating with Navicat, small tables can be migrated directly, while large tables need to be split into segments with filter conditions and committed batch by batch. Done that way, there won’t be any issues. I have used Navicat to migrate various domestic databases and have run into all kinds of situations.
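
As an illustration of the segmenting (the table and column names here are hypothetical), the copy is split into primary-key ranges and each batch is committed separately:

-- Batch 1: first key range
SELECT * FROM orders WHERE order_id >= 0 AND order_id < 500000;
-- Batch 2: next key range
SELECT * FROM orders WHERE order_id >= 500000 AND order_id < 1000000;
-- ...repeat until the full key range is covered, committing after each batch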

| username: Kongdom | Original post link

I think it’s what the previous poster said: it’s caused by checking the “Use transactions” option. Try unchecking it.

| username: zhanggame1 | Original post link

Uncheck that transaction option and you won’t have any issues even transferring hundreds of millions of records in one go. I tested transferring data from Oracle to TiDB: 120 million records completed in an hour and a half.

| username: TiDBer_小阿飞 | Original post link

You’re this proficient with Navicat? :joy:

| username: zhanggame1 | Original post link

I’ve been researching this for a few days, looking for a way to migrate data from Oracle to TiDB. The best tool I found is Navicat’s data transfer, which is very fast because it supports multi-table concurrency. In my tests it transferred 20,000 records per second.

| username: 像风一样的男子 | Original post link

If the data volume is large, wouldn’t exporting SQL from Oracle and then executing it in TiDB be faster?

| username: zhanggame1 | Original post link

No, exporting to CSV didn’t transfer quickly at all.

| username: 小龙虾爱大龙虾 | Original post link

No way, CSV should be faster :joy:

| username: zhanggame1 | Original post link

Do you have any way to quickly export large amounts of data to CSV? I haven’t found a suitable tool.

| username: 像风一样的男子 | Original post link

You should use expdp to export locally on the server; it will definitely be faster than Navicat.

| username: zhanggame1 | Original post link

Can expdp export to CSV? I need to migrate to TiDB.

| username: 小龙虾爱大龙虾 | Original post link

Export the CSV from Oracle using sqluldr2.
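
Once the CSV files are exported, a sketch of loading one into TiDB (the file path, table name, and delimiters are assumptions; the client must have LOCAL INFILE enabled):

-- Load an exported CSV into an existing TiDB table
LOAD DATA LOCAL INFILE '/data/sbtest1.csv'
INTO TABLE sbtest1
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';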

| username: forever | Original post link

You can also use DataX or Kettle; they are very fast and stream the data directly without staging it on disk.