Implementation Details of MPP Communication Between TiDB and TiFlash

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: Implementation details of MPP communication between TiDB and TiFlash (tidb和tiflash就MPP通信具体实现)

| username: atidat

I spent the whole afternoon trying to work out where the code for TiDB's RPC communication with TiFlash under MPP lives. Starting from the entry point DispatchMPPTasks, I followed the call chain down to HandleMPPDAGReq:

func HandleMPPDAGReq(dbReader *dbreader.DBReader, req *coprocessor.Request, mppCtx *MPPCtx) *coprocessor.Response {
    // Decode the serialized DAG request carried in the coprocessor request.
    dagReq := new(tipb.DAGRequest)
    err := proto.Unmarshal(req.Data, dagReq)
    if err != nil {
        return &coprocessor.Response{OtherError: err.Error()}
    }
    dagCtx := &dagContext{
        dbReader:  dbReader,
        startTS:   req.StartTs,
        keyRanges: req.Ranges,
    }
    builder := mppExecBuilder{
        dbReader: dbReader,
        mppCtx:   mppCtx,
        sc:       flagsToStatementContext(dagReq.Flags),
        dagReq:   dagReq,
        dagCtx:   dagCtx,
    }
    // Build the executor tree from the root executor of the DAG.
    mppExec, err := builder.buildMPPExecutor(dagReq.RootExecutor)
    if err != nil {
        panic("build error: " + err.Error())
    }
    // Volcano-style lifecycle: open the tree, then drive it via next().
    err = mppExec.open()
    if err != nil {
        panic("open phase find error: " + err.Error())
    }
    _, err = mppExec.next()
    if err != nil {
        panic("running phase find error: " + err.Error())
    }
    // The response body is empty; result chunks leave through the
    // exchange sender rather than this coprocessor response.
    return &coprocessor.Response{}
}
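The build → open → next sequence that HandleMPPDAGReq drives is the classic volcano-style executor lifecycle. A minimal, self-contained sketch of that pattern (toy types for illustration, not TiDB's actual mppExec interface) might look like:

```go
package main

import "fmt"

// mppExec mirrors the minimal lifecycle the handler above drives:
// open the executor, then pull chunks via next until exhausted.
// These are toy types, not TiDB's real interfaces.
type mppExec interface {
	open() error
	next() ([]int, error) // nil slice signals no more chunks
}

// tableScanExec is a toy leaf executor over an in-memory "table".
type tableScanExec struct {
	rows []int
	pos  int
}

func (e *tableScanExec) open() error { e.pos = 0; return nil }

func (e *tableScanExec) next() ([]int, error) {
	if e.pos >= len(e.rows) {
		return nil, nil // exhausted
	}
	chunk := e.rows[e.pos:]
	e.pos = len(e.rows)
	return chunk, nil
}

// drive replicates the handler's open/next loop and collects all rows.
func drive(exec mppExec) ([]int, error) {
	if err := exec.open(); err != nil {
		return nil, err
	}
	var out []int
	for {
		chunk, err := exec.next()
		if err != nil {
			return nil, err
		}
		if chunk == nil {
			return out, nil
		}
		out = append(out, chunk...)
	}
}

func main() {
	rows, err := drive(&tableScanExec{rows: []int{1, 2, 3}})
	fmt.Println(rows, err)
}
```

In the real handler the root executor is typically an exchange sender, so "driving" the tree pushes result chunks out to the receiving side instead of collecting them locally.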

This seems to be the lowest level of the implementation, yet nowhere can I find the gRPC interfaces actually being called, as I had imagined they would be.
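One likely explanation: the handler above lives under /store/mockstore/unistore, TiDB's in-process mock store used in unit tests, so the "RPC" there is just a local function call and no gRPC appears. In a real cluster, dispatch goes over the MPP RPCs defined in kvproto (e.g. DispatchMPPTask) through the TiKV/TiFlash client layer. A toy sketch of that split, with hypothetical names throughout (none of these types exist in TiDB), could look like:

```go
package main

import "fmt"

// mppDispatcher abstracts "send an MPP task to a store". The names
// below are hypothetical, chosen only to illustrate the split between
// the in-process mock path and the real networked path.
type mppDispatcher interface {
	dispatch(task string) (string, error)
}

// inProcessDispatcher stands in for the unistore mock: the "RPC" is a
// direct call into the handler function, so no gRPC is involved.
type inProcessDispatcher struct {
	handle func(task string) string
}

func (d *inProcessDispatcher) dispatch(task string) (string, error) {
	return d.handle(task), nil
}

// grpcDispatcher marks where a real client would issue the MPP RPC
// over a gRPC connection; the transport is elided in this sketch.
type grpcDispatcher struct {
	addr string
}

func (d *grpcDispatcher) dispatch(task string) (string, error) {
	// A real implementation would send a kvproto MPP request here;
	// this sketch has no network, so it only reports that fact.
	return "", fmt.Errorf("no gRPC transport wired to %s in this sketch", d.addr)
}

func main() {
	mock := &inProcessDispatcher{handle: func(task string) string {
		return "handled:" + task
	}}
	resp, err := mock.dispatch("task-1")
	fmt.Println(resp, err)
}
```

The caller sees the same interface either way, which is why tracing from DispatchMPPTasks into the mock store never hits a network boundary.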

| username: ti-tiger | Original post link

You can take a look at how mppCtx *MPPCtx is used in /store/mockstore/unistore/cophandler/cop_handler.go.
