
There are a few things to consider when planning the hardware, virtual machines, or containers for MariaDB Enterprise Cluster. MariaDB Enterprise Cluster architecture involves deploying MariaDB MaxScale with multiple instances of MariaDB Enterprise Server. The Servers are configured to use multi-primary replication to maintain consistency between themselves, while MariaDB MaxScale routes reads and writes between them. The application establishes a client connection to MariaDB MaxScale, and MaxScale then routes statements to one of the MariaDB Enterprise Servers in the cluster. Writes made to any node in the cluster replicate to all the other nodes.

When MariaDB Enterprise Servers start in a cluster, each Server attempts to establish network connectivity with the other Servers, and groups of connected Servers form a component. During startup, the Primary Component is the Server bootstrapped to run as the Primary Component. When a Server establishes network connectivity with the Primary Component, it synchronizes its local database with that of the cluster. As a member of the Primary Component, the Server becomes operational: able to accept read and write queries from clients. Once the cluster is online, the Primary Component is any combination of Servers that includes more than half the total number of Servers. A Server or group of Servers that loses network connectivity with the majority of the cluster becomes non-operational.

In planning the number of systems to provision for MariaDB Enterprise Cluster, it is important to keep cluster operation in mind: each Server must have enough disk space, and the cluster must be able to maintain a Primary Component in the event of outages. Each Server requires at least enough disk space to store the entire database. The upper storage limit for MariaDB Enterprise Cluster is that of the smallest disk in use.

Each switch in use should have an odd number of Servers above three. In a cluster that spans multiple switches, each data center in use should have an odd number of switches above three. In a cluster that spans multiple data centers, use an odd number of data centers above three. Each data center in use should also have at least one Server dedicated to backup operations; this can be another cluster node or a separate Replica Server kept in sync using MariaDB Replication. In assigning Servers to the switch, switches to the data center, and data centers to the cluster, this model helps preserve the Primary Component. A minimum of three in use means that a single Server or switch can fail without taking down the cluster. Using an odd number above three reduces the risk of a split-brain situation (that is, a case where two separate groups of Servers believe that they are part of the Primary Component and remain operational).

From time to time, a node can fall behind the cluster. This can occur due to expensive operations being issued to it or due to network connectivity issues that cause write-sets to back up in its queue. Whatever the cause, when a node finds that it has fallen too far behind the cluster, it attempts to initiate a state transfer. In a state transfer, the node connects to another node in the cluster and attempts to bring its local database back in sync with the cluster.

When the donor node receives a state transfer request, it checks its write-set cache (the GCache) to see whether it has enough saved write-sets to bring the joiner into sync. If the donor has the intervening write-sets, it performs an Incremental State Transfer (IST), sending only the missing write-sets to the joiner. The joiner applies these write-sets following the global ordering to bring its local database into sync with the cluster. When the donor does not have enough write-sets cached for an IST, it runs a State Snapshot Transfer (SST) instead. In an SST, the donor uses a backup solution, such as MariaDB Enterprise Backup, to copy its data directory to the joiner. When the joiner completes the SST, it processes the write-sets that came in during the transfer. Once it is in sync with the cluster, it becomes operational.
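The majority rule behind the Primary Component can be made concrete with a small sketch. The function below is illustrative only (it is not part of MariaDB); it checks whether a group of Servers is large enough to form the Primary Component, and shows why an even cluster size risks losing quorum on a clean network split:

```python
def has_quorum(partition_size: int, total_servers: int) -> bool:
    """A partition can form the Primary Component only if it holds
    strictly more than half of the total Servers in the cluster."""
    return 2 * partition_size > total_servers

# With 3 Servers, losing one still leaves an operational majority of 2.
print(has_quorum(2, 3))   # True
# With 4 Servers, a 2/2 network split leaves NO side with a strict
# majority: both halves become non-operational.
print(has_quorum(2, 4))   # False
# With 5 Servers, a 3/2 split keeps the 3-Server side operational.
print(has_quorum(3, 5))   # True
```

Because a strict majority is required, two partitions can never both hold quorum at once; an odd total simply ensures that any two-way split always leaves one side with a majority.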
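The donor's choice between IST and SST can be sketched as a simple comparison of sequence numbers. This is a simplified model, not MariaDB's actual implementation; the parameter names (in particular `gcache_lowest_seqno`, standing in for the oldest write-set still held in the donor's GCache) are hypothetical:

```python
def choose_transfer(joiner_seqno: int, donor_seqno: int,
                    gcache_lowest_seqno: int) -> str:
    """Simplified donor-side decision: the joiner is missing write-sets
    (joiner_seqno + 1 .. donor_seqno). If the oldest of those is still
    cached in the GCache, an incremental transfer (IST) suffices;
    otherwise a full snapshot transfer (SST) is required."""
    if joiner_seqno + 1 >= gcache_lowest_seqno:
        return "IST"
    return "SST"

# Joiner stopped at 100, donor is at 150, GCache still holds
# write-sets back to 50: everything missing is cached, so IST.
print(choose_transfer(100, 150, 50))    # IST
# GCache has already purged everything below 120: write-sets
# 101..119 are gone, so the donor must fall back to an SST.
print(choose_transfer(100, 150, 120))   # SST
```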

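Because an SST copies the entire data directory, deployments often size the GCache so that nodes recovering from short outages can use the much cheaper IST instead. A minimal configuration sketch, assuming the Galera provider's `gcache.size` option applies to your Enterprise Server version (the 2G value is only an example, not a recommendation):

```ini
# my.cnf fragment (illustrative values)
[mariadb]
# A larger GCache retains more write-sets on the donor, improving the
# odds that a rejoining node qualifies for IST rather than a full SST.
wsrep_provider_options="gcache.size=2G"
```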