Hi,
We have TiDB Operator 1.4.4 deployed with an empty values.yaml (i.e., all defaults from the Helm chart), which means the custom tidb-scheduler is enabled and deployed. Is it safe to update such an Operator deployment, one that already manages active TidbCluster(s), with the following values.yaml:
scheduler:
  create: false
I don’t mind if TidbCluster(s) are restarted. I would mind if I lost databases and data.
In general, is it known and/or documented what reconfigurations of TiDB Operator are safe to perform after it already manages TidbClusters?
According to the official documentation, TiDB Operator supports upgrading the tidb-scheduler component, which is optional to use. However, it is not recommended to modify the tidb-controller-manager and admission-webhook components after the TiDB Operator has been deployed and is managing TidbClusters.
Regarding your specific question: if you set scheduler.create to false in the values.yaml file and then upgrade TiDB Operator, the tidb-scheduler component will be removed during the upgrade. This may cause the TidbCluster(s) to be restarted, but it should not result in any data loss.
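For reference, a minimal sketch of such an upgrade, assuming the Operator release is named tidb-operator in the tidb-admin namespace (both names are assumptions, adjust to your deployment) and that values.yaml contains the scheduler.create: false override shown above:

    # Upgrade the existing Operator release with the new values.yaml
    # (release name "tidb-operator" and namespace "tidb-admin" are assumptions)
    helm upgrade tidb-operator pingcap/tidb-operator --version v1.4.4 \
      --namespace tidb-admin -f values.yaml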
However, it is always recommended to perform a backup of your data before making any changes to your TiDB deployment, just in case.
If you do not use tidb-scheduler in the TidbCluster, meaning tc.spec.schedulerName is not set (or is set to default-scheduler, as in the newer templates), this change will not impact the current TidbCluster.
If you are using tidb-scheduler in the TidbCluster, meaning tc.spec.schedulerName is tidb-scheduler, you need to change it to default-scheduler first. This will cause a rolling update of the TiDB cluster and reschedule every pod via the default Kubernetes scheduler (the resulting placement is usually the same). See the doc TiDB Scheduler | PingCAP Docs.
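A minimal sketch of that change with kubectl, assuming the TidbCluster is named basic in the tidb-cluster namespace (both names are assumptions):

    # Check which scheduler the TidbCluster currently uses
    kubectl get tc basic -n tidb-cluster -o jsonpath='{.spec.schedulerName}'

    # If it prints "tidb-scheduler", switch to the default Kubernetes scheduler first;
    # this triggers a rolling update of the cluster pods
    kubectl patch tc basic -n tidb-cluster --type merge \
      -p '{"spec":{"schedulerName":"default-scheduler"}}'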
The rolling update only restarts the pods (so they are scheduled again by the new scheduler); the associated PVCs and PVs are not changed, so no data is lost.
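If you want to double-check this, you can confirm that the PVCs stay Bound to the same PVs before and after the rolling update. A sketch, assuming the same basic / tidb-cluster names as above and the standard app.kubernetes.io labels applied by the operator:

    # List the cluster's PVCs and the PVs they are bound to
    kubectl get pvc -n tidb-cluster -l app.kubernetes.io/instance=basic \
      -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,VOLUME:.spec.volumeName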