Developer Tier
The Developer Tier is a free, experimental cluster tier that provides 1 TiDB node, 1 TiKV node, and 1 TiFlash node. Each account can create only one Developer Tier cluster, which is valid for one year and must be recreated after it expires.
The Developer Tier includes:
1 shared TiDB node
1 shared TiKV node (with 10GB of OLTP storage space)
1 shared TiFlash node (with 10GB of OLAP storage space)
This means…
Developer Tier clusters run on shared nodes
Shared nodes may reduce performance
One TiDB Cloud account can use one Developer Tier cluster, valid for one year
You can delete and recreate a cluster multiple times as needed
The one-year free trial period starts from the date the first Developer Tier cluster is created
Other considerations:
Due to the node limitations of the Developer Tier, it cannot meet high availability requirements
VPC (Virtual Private Cloud) cannot be used
Cluster backups in the recycle bin are limited (one automatic backup per day and two manual backups)
Dedicated Tier
Designed for production use, with the advantages of cross-zone high availability, horizontal scaling, and HTAP
Easily define the cluster size of TiDB, TiKV, and TiFlash according to business needs
Data on each TiKV node and TiFlash node is replicated and distributed across different availability zones to achieve high availability
To create a Dedicated Tier cluster, you need to add a payment method or apply for a proof of concept (PoC) trial
A single TiKV node is fine; refer to the TiDB Quick Start Guide in the PingCAP documentation center.
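For a quick local trial as described in that guide, TiUP's playground mode starts one instance of each component (the commands below follow the quick start guide; the explicit component-count flags are shown only for clarity):

    # Install TiUP (installation command from the quick start guide)
    curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh

    # Start a local test cluster with one TiDB, one PD, and one TiKV instance
    tiup playground --db 1 --pd 1 --kv 1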
I am running a single virtual machine deployment on my laptop.
For 1 PD, 1 TiKV, and 1 TiDB, just modify the deployment topology file as shown below. The machine should preferably have no less than 10 GB of memory.
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

server_configs:
  tidb:
    instance.tidb_slow_log_threshold: 300
  tikv:
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]

pd_servers:
  - host: 10.0.1.1

tidb_servers:
  - host: 10.0.1.1

tikv_servers:
  - host: 10.0.1.1
    port: 20160
    status_port: 20180
    config:
      server.labels: { host: "logic-host-1" }

monitoring_servers:
  - host: 10.0.1.1

grafana_servers:
  - host: 10.0.1.1
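Assuming the topology above is saved as topo.yaml (the file name, cluster name, and version below are only examples), deploying and starting with TiUP looks roughly like this:

    # Deploy a cluster named "tidb-test" from the topology file (version is illustrative)
    tiup cluster deploy tidb-test v6.1.0 ./topo.yaml --user root -p

    # Start the cluster and check that all components come up
    tiup cluster start tidb-test
    tiup cluster display tidb-test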
You misunderstood. When I mentioned a single node, I was referring to the number of instances deployed for each component. If you deploy one instance each of TiDB, TiKV, and PD, that’s a single-node deployment. Also, there’s no requirement for TiDB to have at least 7 nodes.
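If you want to confirm the instance count after deployment, the cluster topology is visible through information_schema (a minimal sketch; the host comes from the topology above, while port 4000 and a passwordless root user are only the defaults and may differ in your setup):

    # With the single-node topology above, this should list exactly one tidb, one tikv, and one pd row
    mysql -h 10.0.1.1 -P 4000 -u root -e "SELECT TYPE, INSTANCE, VERSION FROM information_schema.CLUSTER_INFO;"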