Currently, user-provided auto ID value is only supported in update mode

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: currently user provided auto id value is only supported in update mode

| username: TIDB救我狗命

[TiDB Usage Environment] Production Environment / Testing / PoC
[TiDB Version] 6.1.0
[Reproduction Path] What operations were performed that caused the issue
[Encountered Issue: Issue Phenomenon and Impact]
[Resource Configuration] Go to TiDB Dashboard - Cluster Info - Hosts and take a screenshot of this page
[Attachments: Screenshots/Logs/Monitoring]

TiSpark version: (screenshot)

Source code: (screenshot)

Is there any solution? The workload is simply importing data into TiDB.

| username: TIDB救我狗命 | Original post link

This is my import code.

| username: 数据小黑 | Original post link

Does your target table contain an auto-increment column? TiSpark currently does not support writing explicit values into auto-increment columns outside of update mode. The solutions are either to leave the auto-increment column out when creating the table, or to write in update (replace) mode so that the auto-increment column is not touched. From a design perspective, if a table is populated entirely by bulk import, its auto-increment column usually carries no business meaning anyway. A sketch of a replace-mode write follows below.
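For what it's worth, here is a minimal sketch of a replace-mode write using TiSpark's batch write DataSource API. The addresses, credentials, path, and table names are placeholders, not taken from this thread; if I read the error correctly, `replace = true` corresponds to the "update mode" it mentions:

```scala
import org.apache.spark.sql.SparkSession

// Assumes the session is already configured for TiSpark, e.g.
// spark.sql.extensions=org.apache.spark.sql.TiExtensions and
// spark.tispark.pd.addresses pointing at your PD endpoints.
val spark = SparkSession.builder().appName("tispark-replace-write").getOrCreate()

// Hypothetical source; substitute whatever your import job reads.
val df = spark.read.parquet("hdfs:///path/to/source")

df.write
  .format("tidb")
  .option("tidb.addr", "127.0.0.1")   // placeholder TiDB server address
  .option("tidb.port", "4000")
  .option("tidb.user", "root")
  .option("tidb.password", "")
  .option("database", "test")
  .option("table", "target_table")
  .option("replace", "true")          // replace rows on primary/unique key conflict
  .mode("append")                     // TiSpark batch writes use append mode
  .save()
```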

| username: TIDB救我狗命 | Original post link

Yes, the target table does contain an auto-increment column. It really doesn't seem to serve much purpose, so I'll try dropping it and see.

| username: wfxxh | Original post link

So why do you have to use TiSpark to write? :joy:

| username: TIDB救我狗命 | Original post link

I haven’t compared the performance of JDBC and TiSpark before. My current solution uses Spark JDBC, and I want to see if TiSpark would be more efficient.

| username: TIDB救我狗命 | Original post link

Another major reason is that Spark's built-in JDBC writer does not seem to support REPLACE, so I have to delete the old data before inserting the new data, and that delete step takes quite a long time…

| username: wfxxh | Original post link

The prerequisite for REPLACE is that your table has a PRIMARY KEY or UNIQUE KEY. You can write a custom JDBC writer yourself, which is fairly easy; there also seem to be open-source implementations on GitHub. A rough sketch is below.
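A minimal sketch of such a custom writer, assuming the MySQL JDBC driver is on the executor classpath; the method name `replaceInto`, the URL, and the batch size are illustrative, not from any particular open-source project:

```scala
import java.sql.DriverManager
import org.apache.spark.sql.DataFrame

// Write a DataFrame with REPLACE INTO so rows that conflict on the
// PRIMARY/UNIQUE key overwrite old data, with no separate delete pass.
def replaceInto(df: DataFrame, jdbcUrl: String, table: String, batchSize: Int = 500): Unit = {
  val cols = df.columns
  val sql = s"REPLACE INTO $table (${cols.mkString(", ")}) " +
    s"VALUES (${cols.map(_ => "?").mkString(", ")})"
  df.rdd.foreachPartition { rows =>
    // e.g. jdbc:mysql://tidb-host:4000/test?user=root&password=&rewriteBatchedStatements=true
    val conn = DriverManager.getConnection(jdbcUrl)
    conn.setAutoCommit(false)
    val stmt = conn.prepareStatement(sql)
    try {
      rows.grouped(batchSize).foreach { batch =>
        batch.foreach { row =>
          cols.indices.foreach(i => stmt.setObject(i + 1, row.get(i).asInstanceOf[AnyRef]))
          stmt.addBatch()
        }
        stmt.executeBatch()
        conn.commit()
      }
    } finally {
      stmt.close()
      conn.close()
    }
  }
}
```

Since TiDB speaks the MySQL protocol, REPLACE INTO works here as long as the table has a PRIMARY KEY or UNIQUE KEY; without one it degenerates into a plain insert.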

| username: TIDB救我狗命 | Original post link

Every one of my tables has a PRIMARY KEY, and I've always wanted to get rid of the delete step.

| username: TIDB救我狗命 | Original post link

I'll also try implementing a custom Spark JDBC write and see how it works.

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.