[Resource Download] 2024 Xitu Developer Conference | Next-Generation RAG: Enhancing RAG Capabilities with Knowledge Graphs using tidb.ai

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 【资料下载】2024 稀土开发者大会 | 下一代 RAG:tidb.ai 使用知识图谱增强 RAG 能力

| username: YY-ha

On June 28-29, the 2024 Xitu Developer Conference was successfully held in Beijing. TiDB ecosystem architect Wang Qizhi was invited to speak and delivered a presentation titled “Next-Generation RAG: Enhancing RAG Capabilities with Knowledge Graphs using tidb.ai” in the RAG and Vector Search track.

With the popularity of ChatGPT, LLMs (Large Language Models) have once again come into the spotlight. However, when handling domain-specific queries, the content generated by large models often suffers from stale and inaccurate information. This is where RAG (Retrieval-Augmented Generation) acts like a “plugin”: by integrating external knowledge bases, it dynamically supplies information to LLMs, addressing the problems of static training data and factual accuracy. Vector search, a key component of RAG, represents data as high-dimensional vectors for efficient similarity search, making information retrieval faster and more precise.
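
To make the similarity idea concrete, here is a minimal sketch (not from the talk) of the cosine distance that underlies vector search; the tiny 3-dimensional vectors are illustrative only, since real embedding models produce hundreds or thousands of dimensions:

```python
# Toy illustration of cosine distance between two embedding vectors
# (smaller distance = more similar direction).
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 - cos(theta): 0 for identical direction, 2 for opposite direction."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query_vec = np.array([0.12, 0.80, 0.35])  # toy 3-dim "embeddings"; real models
doc_vec = np.array([0.10, 0.75, 0.40])    # output hundreds of dimensions
print(cosine_distance(query_vec, doc_vec))  # close to 0 => very similar
```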

In deeper business scenarios, applying RAG faces a series of challenges spanning document parsing, data fusion, index creation, vector databases, hybrid retrieval, and reranking. Vector search itself faces challenges in query latency, recall accuracy, and storage cost. How can RAG and vector search better meet enterprise needs in production? How can we build a full-chain RAG service on top of vectors to improve developer efficiency and reduce costs? These are the questions this track at the Xitu Developer Conference set out to explore.

Download Area

Enhancing RAG Capabilities with Knowledge Graphs using tidb.ai.pdf (15.8 MB)

Highlights

Before formally introducing how TiDB uses graphs to enhance RAG capabilities, let’s briefly introduce tidb.ai. tidb.ai is an AI Q&A bot built to answer technical questions from TiDB community users, improving response speed and user experience. Previously, tidb.ai faced some challenges:

  • Community users had to wait for technical support engineers to answer their questions, which could be slow and inefficient.
  • Although the TiDB documentation is extensive, users found it difficult to quickly build a comprehensive understanding from it.

Behind tidb.ai is a simple RAG implementation.

Starting from the top left, suppose you have a pile of documents. You first split them into individual text chunks, then feed each chunk into an embedding model to generate an embedding vector, and finally store the vectors together with their chunks in TiDB Serverless. In RAG terminology this process is called indexing; once everything is stored, indexing is complete.
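
As a rough illustration of this indexing step, here is a minimal Python sketch. The `chunks` table, its vector-typed `embedding` column, the connection parameters, and the `embed()` helper are all assumptions made for illustration, not details from the talk:

```python
# Minimal indexing sketch: split text, embed each chunk, store both in TiDB Serverless.
import json
import pymysql  # TiDB Serverless speaks the MySQL protocol

def split_text(text: str, size: int = 500) -> list[str]:
    # Naive fixed-size chunking; real splitters respect sentence/section boundaries.
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(chunk: str) -> list[float]:
    # Placeholder: call whatever embedding model you use and return its vector.
    raise NotImplementedError

conn = pymysql.connect(host="<tidb-serverless-host>", user="<user>",
                       password="<password>", database="rag")
with conn.cursor() as cur:
    for chunk in split_text(open("tidb_docs.txt").read()):
        vec = embed(chunk)
        # The embedding is serialized as a JSON-style array string here;
        # adjust to whatever literal format your vector column expects.
        cur.execute("INSERT INTO chunks (content, embedding) VALUES (%s, %s)",
                    (chunk, json.dumps(vec)))
conn.commit()
```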

Then the application can serve queries. When a user asks, “What is TiDB?”, we feed the question into the embedding model to generate an embedding vector. Using this vector, we call the TiDB Serverless built-in function Vec_Cosine_Distance to compute the cosine distance between the question vector and each stored vector, retrieve the top N most similar chunks, and finally pass them to a large model to generate the answer.
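
Continuing the indexing sketch above (same `chunks` table, connection, and `embed()` placeholder, all assumed for illustration), the retrieval step might look like this:

```python
# Minimal retrieval sketch: embed the question, fetch the top-N closest chunks,
# and hand them to an LLM as context.
question = "What is TiDB?"
q_vec = embed(question)

with conn.cursor() as cur:
    # Vec_Cosine_Distance is the TiDB Serverless built-in mentioned above;
    # a smaller distance means a closer match, so order ascending and take top N.
    cur.execute(
        """
        SELECT content
        FROM chunks
        ORDER BY Vec_Cosine_Distance(embedding, %s)
        LIMIT 5
        """,
        (json.dumps(q_vec),),
    )
    context = "\n\n".join(row[0] for row in cur.fetchall())

# Finally, pass the retrieved context plus the question to the LLM of your choice;
# the prompt wording below is just an example.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```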

Vector within TiDB > TiDB + Vector Database

Here we also want to explain why having vectors inside TiDB is better than pairing TiDB with a separate vector database.

First, let’s look at the simplified architecture of TiDB Serverless.

Why use TiDB Serverless? With TiDB Serverless, your service architecture becomes simpler. You don’t need to maintain two copies of the data in two different databases, nor deal with keeping them transactionally in sync, which sometimes cannot even reach eventual consistency; maintaining two copies also costs extra effort in application code, which is a significant burden. In addition, users in the MySQL ecosystem have long envied pgvector, and there was nothing comparable until TiDB Serverless appeared, letting us use vectors directly in the MySQL ecosystem.

Beyond syntax, we improve on pgvector in the following ways:

  • We don’t limit the amount of data you can store; you can store as much as you want without sharding.
  • You don’t need to set up an additional analytical database.
  • You don’t need to worry about availability issues; we are a distributed database.
  • You don’t need to bear the burden of a more complex application architecture.

TiDB Serverless is an all-in-one database that lightens developers’ load. On one hand, it removes the mental overhead of maintaining a complex architecture; on the other hand, it helps developers cut costs on the budget side.

As Wang Qizhi said, AI will be the sky we look up to in the future, but the database is the soil beneath our feet. Only by standing firm on the ground can we see further.