Single-Table Query Returns Wrong Results When the Field and the Condition Value Have Different Data Types

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 单表查询 字段与条件的值 数据类型不同 查询出错

| username: TiDBer_QXegKncF

【TiDB Usage Environment】Production Environment / Testing / PoC
【TiDB Version】5.4
【Reproduction Path】Single-table query returns wrong results
【Encountered Problem: Problem Phenomenon and Impact】

SQL as follows:
select * from fac_app_data_table_field_val where table_id = 1969241186793349598 and val0 = 1983480395684134919

Query result

The query filters on val0 = 1983480395684134919, but it returns rows where val0 is 1983480395684134999.

Business requirement: val0 must be set to varchar type.

【Resource Configuration】Go to TiDB Dashboard - Cluster Info - Hosts and take a screenshot of this page
【Attachments: Screenshots/Logs/Monitoring】

| username: zhaokede | Original post link

val0 is of varchar type, and the condition does not include single quotes. Try adding them.
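
A minimal sketch of that fix, simply quoting the literal so the comparison stays string-to-string:

select * from fac_app_data_table_field_val
where table_id = 1969241186793349598
  and val0 = '1983480395684134919';
-- With the quotes, val0 is compared as a string and only exact matches are returned.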

| username: TiDBer_QXegKncF | Original post link

When performing join queries, you can’t always rely on converting types in the SQL, and implicit conversions in complex SQL can also cause issues. You can take a look at my other question.

| username: 啦啦啦啦啦 | Original post link

Try to avoid implicit conversions by the database as much as possible. You can check the execution plan.
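
For example, EXPLAIN on the original statement should surface the implicit conversion (the exact operator text varies by TiDB version):

explain
select * from fac_app_data_table_field_val
where table_id = 1969241186793349598 and val0 = 1983480395684134919;
-- The Selection operator typically shows something like
-- eq(cast(val0, double BINARY), 1.983480395684135e+18),
-- i.e. val0 is cast to DOUBLE before the comparison, which is where precision is lost.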

| username: dba远航 | Original post link

The value of val0 should be enclosed in quotes to avoid type conversion.

| username: TiDBer_QXegKncF | Original post link

I also found that implicit conversion in complex SQL can cause problems, but our project depends on this design and the code is already mature. Is there any good way to solve it?

| username: db_user | Original post link

What is the table structure, and what are the data types of the fields? Can you show them? If it involves implicit conversion, you can use cast to solve it. It seems that your value is already close to the maximum value of bigint, which might cause problems.
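
A sketch of the CAST approach, casting the numeric literal rather than the column:

select * from fac_app_data_table_field_val
where table_id = 1969241186793349598
  and val0 = cast(1983480395684134919 as char);
-- Casting the number to CHAR keeps the comparison string-to-string,
-- so val0 is never converted to DOUBLE and no precision is lost.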

| username: 啦啦啦啦啦 | Original post link

Yes, I remember MySQL has the same issue: when a VARCHAR is compared with a number, both sides are converted to floating point, and that conversion loses precision on long values. If the field type cannot be changed, you can only modify the code, either by adding quotes or using the CAST function.

| username: zhanggame1 | Original post link

Loss of precision with bigint has been mentioned before on the forum: two different numbers were judged equal and the comparison returned true.
What are the types of table_id and val0?
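
The effect can be reproduced directly; a minimal sketch that should hold on both MySQL and TiDB:

select 1983480395684134919 = '1983480395684134999';
-- Expected result: 1. Comparing an integer with a string converts both sides to DOUBLE,
-- and these two 19-digit values round to the same double.
select 1983480395684134919 = cast('1983480395684134999' as signed);
-- Expected result: 0. An explicit cast to a signed integer keeps full precision.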

| username: tidb菜鸟一只 | Original post link

That won’t work; you can’t just change the table structure directly and turn the original bigint into a varchar…

| username: TiDBer_QXegKncF | Original post link

This field can only be a string.

| username: forever | Original post link

Is the other related field a bigint?

| username: TiDBer_QXegKncF | Original post link

CREATE TABLE fac_app_data_table_field_val (
id bigint(20) NOT NULL,
school_id int(11) NOT NULL COMMENT 'School ID',
table_id bigint(20) NOT NULL COMMENT 'Data Table ID',
batch_code varchar(50) NOT NULL COMMENT 'Submission Batch Code',
user_id bigint(20) NOT NULL COMMENT 'Owner ID',
submit_id bigint(20) NOT NULL COMMENT 'Submitter ID',
user_type varchar(50) NOT NULL COMMENT 'Owner Role',
submit_type varchar(50) NOT NULL COMMENT 'Submitter Role',
source int(11) NOT NULL COMMENT 'Data Source, 1: Mobile, 2: Batch Import',
sync_type int(11) DEFAULT NULL COMMENT 'Sync Data Type, 0: Not Synced, 1: Synced, 2: Syncing, 3: Abnormal Data',
sync_count int(11) DEFAULT NULL COMMENT 'Sync Data Type, 0: Not Synced, 1: Synced, 2: Syncing, 3: Abnormal Data',
flg varchar(20) NOT NULL DEFAULT 'Y' COMMENT 'Deleted or Not',
ctime datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Creation Time',
mtime datetime DEFAULT NULL COMMENT 'Update Time',
create_user bigint(20) NOT NULL DEFAULT '-2' COMMENT 'Created By',
update_user bigint(20) DEFAULT NULL COMMENT 'Updated By',
val0 text DEFAULT NULL COMMENT 'Data Table Field Value',
val1 text DEFAULT NULL COMMENT 'Data Table Field Value',
val2 text DEFAULT NULL COMMENT 'Data Table Field Value',
val3 text DEFAULT NULL COMMENT 'Data Table Field Value',
val4 text DEFAULT NULL COMMENT 'Data Table Field Value',
val5 text DEFAULT NULL COMMENT 'Data Table Field Value',
val6 text DEFAULT NULL COMMENT 'Data Table Field Value',
val7 text DEFAULT NULL COMMENT 'Data Table Field Value',
val8 text DEFAULT NULL COMMENT 'Data Table Field Value',
val9 text DEFAULT NULL COMMENT 'Data Table Field Value',

| username: TiDBer_QXegKncF | Original post link

Yes, the join condition is written that way: bigint = varchar.
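
If the bigint-to-varchar join itself cannot be changed, one option is to put an explicit cast on the bigint side so the join stays string-to-string. A sketch only; the driving table and its some_bigint_id column below are hypothetical placeholders:

select v.*
from some_business_table t
join fac_app_data_table_field_val v
  on v.table_id = t.table_id
 and v.val0 = cast(t.some_bigint_id as char);
-- Casting the bigint side to CHAR avoids converting val0 to DOUBLE row by row.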

| username: db_user | Original post link

This should be considered an expected phenomenon and is consistent with MySQL behavior. The issue on GitHub indicates that the official team does not regard this behavior as a bug. You might want to try using the CAST function for conversion. Problems may arise when the value exceeds 18 digits. I have encountered similar issues before.

CAST conversion cannot properly use indexes - Suggestions and Feedback / Product Defects - TiDB Q&A Community (asktug.com)
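
Following up on that post, which side of the comparison carries the cast decides whether an index can help. A sketch, assuming a prefix index on val0 were added (the DDL above shows none):

-- Hypothetical prefix index, purely for illustration:
alter table fac_app_data_table_field_val add index idx_val0 (val0(32));

-- Casting the column defeats idx_val0 and forces a scan:
explain select id from fac_app_data_table_field_val
where cast(val0 as signed) = 1983480395684134919;

-- Casting the literal leaves val0 bare, so idx_val0 stays usable:
explain select id from fac_app_data_table_field_val
where val0 = cast(1983480395684134919 as char);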

| username: 双开门变频冰箱 | Original post link

Try adding quotation marks.

| username: kkpeter | Original post link

Try modifying the code to add quotes; everything else is unreliable.

| username: Jellybean | Original post link

There is an implicit conversion phenomenon when the field type and value type are different. From the perspectives of design, maintenance, and performance, it should be avoided.

It is recommended to either adjust the code or adjust the table structure type.

| username: 哈喽沃德 | Original post link

The text needs to be converted to char, right?

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.