What is MySQL InnoDB ReplicaSet?

In MySQL, InnoDB ReplicaSet is a feature that allows you to set up traditional replication environments. It enables asynchronous GTID-based replication, similar to InnoDB Cluster but without the automatic failover component.

Here are some key features of InnoDB ReplicaSet:

  • InnoDB ReplicaSet consists of a primary server and multiple secondary servers (replicas), forming a replication topology.
  • It provides a way to administer and manage the replication setup using the AdminAPI, which offers operations for monitoring the ReplicaSet’s status and performing manual failover.
  • InnoDB ReplicaSet can be used in scenarios where high availability is not the primary requirement, but scaling out reads with manual failover is sufficient.
  • MySQL Router, a component of MySQL, supports automatic configuration against InnoDB ReplicaSet, making it easy to set up replication and routing.
  • It is also possible to adopt an existing replication setup and configure it as an InnoDB ReplicaSet using the AdminAPI, without needing to create a new ReplicaSet from scratch.
  • InnoDB ReplicaSet can be used over a Wide Area Network (WAN) without impacting write performance, as the replication channels work asynchronously. However, replication lag may be more significant over a WAN.
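As a quick illustration of the MySQL Router point above, bootstrapping Router against a ReplicaSet primary looks roughly like this. This is a hedged sketch: the account, directory, and system user below are placeholders, not values from this setup.

```shell
# Hypothetical example: bootstrap MySQL Router against the ReplicaSet
# primary so it auto-discovers the topology; adjust account and paths.
mysqlrouter --bootstrap genexrepladmin@172.31.X.XXX:3308 \
            --directory /opt/myrouter --user mysqlrouter

# Start the generated Router instance
/opt/myrouter/start.sh
```

Router then exposes read/write and read-only ports that follow the ReplicaSet topology.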

In summary, InnoDB ReplicaSet is a replication feature in MySQL that provides flexibility in setting up and managing replication configurations, offering read scaling and manual failover capabilities without emphasizing high availability the way InnoDB Cluster does.

Environment Overview

We have two instances running version “8.0.32-24-debug Percona Server (GPL), Release 24, Revision e5c6e9d2-debug”. We will set up replication between these two instances using the ReplicaSet feature. Note, however, that there is no limit on the number of replicas a ReplicaSet can handle.

Instances

172.31.X.XXX:3308
172.31.X.XXX:3318

Mandatory Configuration Needed for ReplicaSet

Only GTID-based environments can be created or adopted as a ReplicaSet, and below are the mandatory settings:

+----------------------------------------+---------------+----------------+--------------------------------------------------+
| Variable                               | Current Value | Required Value | Note                                             |
+----------------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_transaction_dependency_tracking | COMMIT_ORDER  | WRITESET       | Update the server variable                       |
| enforce_gtid_consistency               | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                              | OFF           | ON             | Update read-only variable and restart the server |
| server_id                              | 1             | <unique ID>    | Update read-only variable and restart the server |
+----------------------------------------+---------------+----------------+--------------------------------------------------+
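These prerequisites can also be checked and, for the most part, fixed by hand before running the configurator. A rough SQL sketch of what the AdminAPI automates (the server_id value is just an example):

```sql
-- Check the current values of the prerequisites:
SELECT @@gtid_mode, @@enforce_gtid_consistency,
       @@server_id, @@binlog_transaction_dependency_tracking;

-- This one is dynamic and can be changed on the fly:
SET GLOBAL binlog_transaction_dependency_tracking = 'WRITESET';

-- The rest are read-only at runtime: persist them, then restart mysqld.
SET PERSIST_ONLY enforce_gtid_consistency = ON;
SET PERSIST_ONLY gtid_mode = ON;
SET PERSIST_ONLY server_id = 3308;  -- example unique ID, pick your own
```

In practice, it is simpler to let dba.configureReplicaSetInstance() do all of this, as shown in the next section.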

MySQL Shell Pre-checks

We have to validate and configure both instances using dba.configureReplicaSetInstance(), which makes the mandatory config changes, restarts MySQL to make those changes persistent, and finally creates the admin user for handling ReplicaSet admin operations.

Configure both the nodes using dba.configureReplicaSetInstance()

The dba.configureReplicaSetInstance() utility validates each instance, makes the configuration changes necessary for an InnoDB-managed ReplicaSet, and restarts the instance so the changes persist. In the process, it also offers to create a cluster admin user, which will be used to configure and administer the ReplicaSet.

shell.connect("genexrepladmin@172.31.X.XXX:3308");
dba.configureReplicaSetInstance()
shell.connect("genexrepladmin@172.31.X.XXX:3318");
dba.configureReplicaSetInstance()

Expected Output

 MySQL  localhost  JS > shell.connect("genexrepladmin@172.31.X.XXX:3308");
Creating a session to 'genexrepladmin@172.31.X.XXX:3308'
Please provide the password for 'genexrepladmin@172.31.X.XXX:3308': **************
Save password for 'genexrepladmin@172.31.X.XXX:3308'? [Y]es/[N]o/Ne[v]er (default No): Y
Fetching schema names for auto-completion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 24
Server version: 8.0.32-24-debug Percona Server (GPL), Release 24, Revision e5c6e9d2-debug
No default schema selected; type \use <schema> to set one.
<ClassicSession:genexrepladmin@172.31.X.XXX:3308>
 MySQL  172.31.X.XXX:3308 ssl  JS >
 MySQL  172.31.X.XXX:3308 ssl  JS > dba.configureReplicaSetInstance()
Configuring local MySQL instance listening at port 3318 for use in an InnoDB ReplicaSet...

This instance reports its own address as 172.31.X.XXX:3318

ERROR: User 'root' can only connect from 'localhost'. New account(s) with proper source address specification to allow remote connection from all instances must be created to manage the cluster.

1) Create remotely usable account for 'root' with same grants and password
2) Create a new admin account for InnoDB ReplicaSet with minimal required grants
3) Ignore and continue
4) Cancel

Please select an option [1]: 2
Please provide an account name (e.g: icroot@%) to have it created with the necessary
privileges or leave empty and press Enter to cancel.
Account Name: genexrepladmin
Password for new account: **************
Confirm password: **************

applierWorkerThreads will be set to the default value of 4.

NOTE: Some configuration options need to be fixed:
+----------------------------------------+---------------+----------------+--------------------------------------------------+
| Variable                               | Current Value | Required Value | Note                                             |
+----------------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_transaction_dependency_tracking | COMMIT_ORDER  | WRITESET       | Update the server variable                       |
| enforce_gtid_consistency               | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                              | OFF           | ON             | Update read-only variable and restart the server |
+----------------------------------------+---------------+----------------+--------------------------------------------------+

Some variables need to be changed, but cannot be done dynamically on the server.
Do you want to perform the required configuration changes? [y/n]: y
Do you want to restart the instance after configuring it? [y/n]: y
Cluster admin user 'genexrepladmin'@'%' created.
Configuring instance...
The instance '172.31.X.XXX:3318' was configured to be used in an InnoDB ReplicaSet.
Restarting MySQL...

Expected Failure

ERROR: Remote restart of MySQL server failed: MySQL Error 3707 (HY000): Restart server failed (mysqld is not managed by supervisor process).
Please restart MySQL manually (check https://dev.mysql.com/doc/refman/en/restart.html for more details).
Dba.configureReplicaSetInstance: Restart server failed (mysqld is not managed by supervisor process). (MYSQLSH 3707)

NOTE: This happened because these custom-port MySQL instances are not managed by a supervisor process; configuring instances that are managed by the systemd daemon won’t have this issue.
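In that case the restart has to be done by hand. A sketch for a custom-port instance follows; the socket and defaults-file paths are assumptions for this example, not from the actual setup.

```shell
# Shut down the custom-port instance cleanly...
mysqladmin --socket=/tmp/mysql3318.sock -u root -p shutdown

# ...and start it again with its own config; after this, re-run
# dba.configureReplicaSetInstance() to confirm the settings took effect.
mysqld --defaults-file=/etc/my3318.cnf --daemonize
```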

Ways of Setting up ReplicaSet

There are two ways of configuring a ReplicaSet: either you build a complete setup from scratch, or you adopt an existing GTID-based replication topology.

  • ReplicaSet from Scratch
  • Adopting an existing GTID-based asynchronous replication

ReplicaSet From Scratch

There are only two major steps to create the ReplicaSet from scratch, as below. Make sure you validate the ReplicaSet status using rs.status() after each step so that issues can be caught at each stage.

  • Connect to the master node and create the ReplicaSet
  • Check the ReplicaSet status and add the second node-2 to the ReplicaSet

Connect to the Master node and create ReplicaSet

 MySQL  172.31.X.XXX:3308 ssl  JS > var rs = dba.createReplicaSet("replicaSet1")
A new replicaset with instance '172.31.X.XXX:3308' will be created.

* Checking MySQL instance at 172.31.X.XXX:3308

This instance reports its own address as 172.31.X.XXX:3308
172.31.X.XXX:3308: Instance configuration is suitable.

* Updating metadata...

ReplicaSet object successfully created for 172.31.X.XXX:3308.
Use rs.addInstance() to add more asynchronously replicated instances to this replicaset and rs.status() to check its status.

NOTE: Upon creating the ReplicaSet, the mysql_innodb_cluster_metadata schema is created on the primary node.
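You can peek at that schema directly. The table and column names below follow the metadata schema shipped with recent Shell versions; verify against your own deployment before relying on them.

```sql
-- List the metadata tables created by dba.createReplicaSet():
SHOW TABLES FROM mysql_innodb_cluster_metadata;

-- Registered members of the ReplicaSet (column names may vary by
-- metadata schema version):
SELECT instance_name, addresses
  FROM mysql_innodb_cluster_metadata.instances;
```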

Check the ReplicaSet status after adding the first node-1

 MySQL  172.31.X.XXX:3308 ssl  JS > rs.status()
{
    "replicaSet": {
        "name": "replicaSet1",
        "primary": "172.31.X.XXX:3308",
        "status": "AVAILABLE",
        "statusText": "All instances available.",
        "topology": {
            "172.31.X.XXX:3308": {
                "address": "172.31.X.XXX:3308",
                "instanceRole": "PRIMARY",
                "mode": "R/W",
                "status": "ONLINE"
            }
        },
        "type": "ASYNC"
    }
}

Let’s add the second node (node-2) now

 MySQL  172.31.X.XXX:3308 ssl  JS > rs.addInstance('genexrepladmin@172.31.X.XXX:3318');
Adding instance to the replicaset...

* Performing validation checks

This instance reports its own address as 172.31.X.XXX:3318
172.31.X.XXX:3318: Instance configuration is suitable.

* Checking async replication topology...

* Checking transaction state of the instance...

NOTE: The target instance '172.31.X.XXX:3318' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether replication can completely recover its state.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of '172.31.X.XXX:3318' with a physical snapshot from an existing replicaset member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

WARNING: It should be safe to rely on replication to incrementally recover the state of the new instance if you are sure all updates ever executed in the replicaset were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the replicaset or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.


Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): C
* Updating topology
Waiting for clone process of the new member to complete. Press ^C to abort the operation.
* Waiting for clone to finish...
NOTE: 172.31.X.XXX:3318 is being cloned from 172.31.X.XXX:3308
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: 172.31.X.XXX:3318 is shutting down...

* Waiting for server restart... timeout

Expected Failure

WARNING: Clone process appears to have finished and tried to restart the MySQL server, but it has not yet started back up.

Please make sure the MySQL server at '172.31.X.XXX:3318' is properly restarted. The operation will be reverted, but you may retry adding the instance after restarting it.
ERROR: Error adding instance to replicaset: MYSQLSH 51156: Timeout waiting for server to restart
Reverting topology changes...
ERROR: Error while reverting replication changes: MySQL Error 2013: Lost connection to MySQL server during query

Changes successfully reverted.
ERROR: 172.31.X.XXX:3318 could not be added to the replicaset
ReplicaSet.addInstance: Timeout waiting for server to restart (MYSQLSH 51156)

NOTE: If the AdminAPI is not able to restart the instances being configured, these errors are expected. Simply repeat adding the second node as below, and it will succeed the second time.
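You can also skip the interactive prompt on the retry by picking the recovery method up front with the documented recoveryMethod option:

```js
// Non-interactive retry: 'incremental' relies on the existing GTID set,
// 'clone' would instead overwrite the target with a physical snapshot.
rs.addInstance('genexrepladmin@172.31.X.XXX:3318', {recoveryMethod: 'incremental'});
```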

 MySQL  172.31.X.XXX:3308 ssl  JS > rs.addInstance('genexrepladmin@172.31.X.XXX:3318');
Adding instance to the replicaset...

* Performing validation checks

This instance reports its own address as 172.31.X.XXX:3318
172.31.X.XXX:3318: Instance configuration is suitable.

* Checking async replication topology...

* Checking transaction state of the instance...
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of '172.31.X.XXX:3318' with a physical snapshot from an existing replicaset member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

WARNING: It should be safe to rely on replication to incrementally recover the state of the new instance if you are sure all updates ever executed in the replicaset were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the replicaset or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.

Incremental state recovery was selected because it seems to be safely usable.

* Updating topology
** Changing replication source of 172.31.X.XXX:3318 to 172.31.X.XXX:3308
** Waiting for new instance to synchronize with PRIMARY...
** Transactions replicated  ############################################################  100%

The instance '172.31.X.XXX:3318' was added to the replicaset and is replicating from 172.31.X.XXX:3308.

* Waiting for instance '172.31.X.XXX:3318' to synchronize the Metadata updates with the PRIMARY...
** Transactions replicated  ############################################################  100%

NOTE: Adding the secondary node to the ReplicaSet copies the mysql_innodb_cluster_metadata schema and configures asynchronous replication on node-2.

Check the ReplicaSet status after adding the second node-2

 MySQL  172.31.X.XXX:3308 ssl  JS > rs.status()
{
    "replicaSet": {
        "name": "replicaSet1",
        "primary": "172.31.X.XXX:3308",
        "status": "AVAILABLE",
        "statusText": "All instances available.",
        "topology": {
            "172.31.X.XXX:3308": {
                "address": "172.31.X.XXX:3308",
                "instanceRole": "PRIMARY",
                "mode": "R/W",
                "status": "ONLINE"
            },
            "172.31.X.XXX:3318": {
                "address": "172.31.X.XXX:3318",
                "instanceRole": "SECONDARY",
                "mode": "R/O",
                "replication": {
                    "applierStatus": "APPLIED_ALL",
                    "applierThreadState": "Waiting for an event from Coordinator",
                    "applierWorkerThreads": 4,
                    "receiverStatus": "ON",
                    "receiverThreadState": "Waiting for source to send event",
                    "replicationLag": null
                },
                "status": "ONLINE"
            }
        },
        "type": "ASYNC"
    }
}

Now you should see both nodes in the configured ReplicaSet. Let’s check the replication status:

mysql> show replica status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for source to send event
                  Master_Host: 172.31.X.XXX
                  Master_User: mysql_innodb_rs_2
                  Master_Port: 3308
                Connect_Retry: 60
              Master_Log_File: binlog.000005
          Read_Master_Log_Pos: 159550
               Relay_Log_File: ip-172.31.X.XXX-relay-bin.000002
                Relay_Log_Pos: 3544
        Relay_Master_Log_File: binlog.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
...........
          Exec_Master_Log_Pos: 159550
...........
        Seconds_Behind_Master: 0
...........
           Retrieved_Gtid_Set: 7258e8ad-e8ae-11ed-99d5-0aa992052ebc:262-266
            Executed_Gtid_Set: 7258e8ad-e8ae-11ed-99d5-0aa992052ebc:1-266
                Auto_Position: 1
...........

NOTE: Only selected fields of the SHOW REPLICA STATUS output are shown.
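The same receiver/applier state that rs.status() reports can also be read straight from performance_schema on the replica:

```sql
-- Receiver (I/O) and applier (SQL) thread state per channel; the
-- ReplicaSet-managed channel is typically the default (empty-name) one.
SELECT cs.channel_name,
       cs.service_state AS receiver_state,
       aps.service_state AS applier_state
  FROM performance_schema.replication_connection_status cs
  JOIN performance_schema.replication_applier_status aps
       USING (channel_name);
```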

Adopting an existing GTID-based asynchronous Replication

 MySQL  172.31.X.XXX:3308 ssl  JS > rs = dba.createReplicaSet('replicasetAdopt', {'adoptFromAR':1})
A new replicaset with the topology visible from '172.31.X.XXX:3308' will be created.

* Scanning replication topology...
** Scanning state of instance 172.31.X.XXX:3308
** Scanning state of instance 172.31.X.XXX:3318

* Discovering async replication topology starting with 172.31.X.XXX:3308
Discovered topology:
- 172.31.X.XXX:3308: uuid=7258e8ad-e8ae-11ed-99d5-0aa992052ebc read_only=no
- 172.31.X.XXX:3318: uuid=4e02901b-e905-11ed-8309-0aa992052ebc read_only=no
    - replicates from 172.31.X.XXX:3308
	source="172.31.X.XXX:3308" channel= status=ON receiver=ON coordinator=ON applier0=ON applier1=ON applier2=ON applier3=ON

* Checking configuration of discovered instances...

This instance reports its own address as 172.31.X.XXX:3308
172.31.X.XXX:3308: Instance configuration is suitable.

This instance reports its own address as 172.31.X.XXX:3318
172.31.X.XXX:3318: Instance configuration is suitable.

* Checking discovered replication topology...
172.31.X.XXX:3308 detected as the PRIMARY.
Replication state of 172.31.X.XXX:3318 is OK.

Validations completed successfully.

* Updating metadata...

ReplicaSet object successfully created for 172.31.X.XXX:3308.
Use rs.addInstance() to add more asynchronously replicated instances to this replicaset and rs.status() to check its status.

<ReplicaSet:replicasetAdopt>

Expected Failure

If you try to adopt a binlog-position-based (non-GTID) replication setup as a ReplicaSet, it fails with the error below:

 MySQL  172.31.X.XXX:3308 ssl  JS > dba.configureReplicaSetInstance()
Dba.configureReplicaSetInstance: This function is not available through a session to an instance belonging to an unmanaged asynchronous replication topology (RuntimeError)

If you try to create the ReplicaSet while connected to a replica instance, you get the error below:

Validations completed successfully.

ERROR: Active connection must be to the PRIMARY when adopting an existing replication topology.
Dba.createReplicaSet: Target instance is not the PRIMARY (MYSQLSH 51313)

Failover scenarios

Let’s see the failover scenarios that are possible with ReplicaSet:

  • Controlled failover as part of Planned maintenance
  • Forced failover as part of Unplanned Outage

Planned maintenance can also be called a controlled failover, where we perform certain pre-checks as below and then trigger the failover:

  • Application connectivity
  • User access validation
  • Replication lag verification
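The lag pre-check in particular can be scripted. One way to do it on the replica (GTID_SUBSET returns 1 when everything received has been applied):

```sql
-- 1 means the replica has applied everything it has received so far:
SELECT GTID_SUBSET(received_transaction_set, @@GLOBAL.gtid_executed)
         AS applied_all
  FROM performance_schema.replication_connection_status;

-- Plus the classic check (Seconds_Behind_Source should be 0):
SHOW REPLICA STATUS;
```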

Controlled failover

Let’s try to perform a controlled failover from 172.31.X.XXX:3308 (source) to 172.31.X.XXX:3318 (replica):

rs.setPrimaryInstance('genexrepladmin@172.31.X.XXX:3318')

Desired Output:

 MySQL  172.31.X.XXX:3308 ssl  JS > rs.setPrimaryInstance('genexrepladmin@172.31.X.XXX:3318')
172.31.X.XXX:3318 will be promoted to PRIMARY of 'replicaSet1'.
The current PRIMARY is 172.31.X.XXX:3308.

* Connecting to replicaset instances
** Connecting to 172.31.X.XXX:3308
** Connecting to 172.31.X.XXX:3318
** Connecting to 172.31.X.XXX:3308
** Connecting to 172.31.X.XXX:3318

* Performing validation checks
** Checking async replication topology...
** Checking transaction state of the instance...

* Synchronizing transaction backlog at 172.31.X.XXX:3318
** Transactions replicated  ############################################################  100%

* Updating metadata

* Acquiring locks in replicaset instances
** Pre-synchronizing SECONDARIES
** Acquiring global lock at PRIMARY
** Acquiring global lock at SECONDARIES

* Updating replication topology
** Changing replication source of 172.31.X.XXX:3308 to 172.31.X.XXX:3318

172.31.X.XXX:3318 was promoted to PRIMARY.

Forced failover

Let’s do a forced failover from 172.31.X.XXX:3318 to 172.31.X.XXX:3308, for which we took MySQL down on port 3318, which was the current primary. The failover sequence has to be followed as below:

  1. shell.connect("genexrepladmin@$new_primary_instance");
  2. rs = dba.getReplicaSet()
  3. rs.forcePrimaryInstance('genexrepladmin@$new_primary_instance', {invalidateErrorInstances: true})
  4. rs.removeInstance('genexrepladmin@$old_primary_instance', {force: true})
  5. rs.addInstance('genexrepladmin@$old_primary_instance')
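With the example instances from this post, the whole sequence looks like this (3318 is the failed primary, 3308 gets promoted):

```js
// 1. Connect to the surviving node that will become the new primary
shell.connect("genexrepladmin@172.31.X.XXX:3308");

// 2-3. Fetch the ReplicaSet and force-promote, invalidating the old primary
var rs = dba.getReplicaSet();
rs.forcePrimaryInstance('genexrepladmin@172.31.X.XXX:3308',
                        {invalidateErrorInstances: true});

// 4. Drop the invalidated node from the topology
rs.removeInstance('genexrepladmin@172.31.X.XXX:3318', {force: true});

// 5. Once the failed node is back up, re-add it as a replica
rs.addInstance('genexrepladmin@172.31.X.XXX:3318');
```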

Sequence of a forced failover

Verify the status when you are not able to reach the existing primary

MySQL  172.31.X.XXX:3318 ssl  JS > rs.status()
ReplicaSet.status: Failed to execute query on Metadata server 172.31.X.XXX:3318: Lost connection to MySQL server during query (MySQL Error 2013)

Fetch the ReplicaSet information

 MySQL  172.31.X.XXX:3308 ssl  JS > rs=dba.getReplicaSet()
You are connected to a member of replicaset 'replicaSet1'.
<ReplicaSet:replicaSet1>

Force primary Failover to the node-1 which is available

 MySQL  172.31.X.XXX:3308 ssl  JS > rs.forcePrimaryInstance('genexrepladmin@172.31.X.XXX:3308',{invalidateErrorInstances:true})
* Connecting to replicaset instances
** Connecting to 172.31.X.XXX:3308

* Waiting for all received transactions to be applied
172.31.X.XXX:3308 will be promoted to PRIMARY of the replicaset and the former PRIMARY will be invalidated.

* Checking status of last known PRIMARY
NOTE: 172.31.X.XXX:3318 is UNREACHABLE
* Checking status of promoted instance
NOTE: 172.31.X.XXX:3308 has status ERROR
* Checking transaction set status
* Promoting 172.31.X.XXX:3308 to a PRIMARY...

* Updating metadata...

172.31.X.XXX:3308 was force-promoted to PRIMARY.
NOTE: Former PRIMARY 172.31.X.XXX:3318 is now invalidated and must be removed from the replicaset.
* Updating source of remaining SECONDARY instances

Failover finished successfully.

Check the replicaSet status

 MySQL  172.31.X.XXX:3308 ssl  JS > rs.status()
{
    "replicaSet": {
        "name": "replicaSet1",
        "primary": "172.31.X.XXX:3308",
        "status": "AVAILABLE_PARTIAL",
        "statusText": "The PRIMARY instance is available, but one or more SECONDARY instances are not.",
        "topology": {
            "172.31.X.XXX:3308": {
                "address": "172.31.X.XXX:3308",
                "instanceRole": "PRIMARY",
                "mode": "R/W",
                "status": "ONLINE"
            },
            "172.31.X.XXX:3318": {
                "address": "172.31.X.XXX:3318",
                "connectError": "Could not open connection to '172.31.X.XXX:3318': Can't connect to MySQL server on '172.31.X.XXX:3318' (111)",
                "fenced": null,
                "instanceRole": null,
                "mode": null,
                "status": "INVALIDATED"
            }
        },
        "type": "ASYNC"
    }
}

Remove the failed node from the replicaSet

 MySQL  172.31.X.XXX:3308 ssl  JS > rs.removeInstance('genexrepladmin@172.31.X.XXX:3318',{force: true})
WARNING: Replication is not active in instance 172.31.X.XXX:3318.
NOTE: 172.31.X.XXX:3318 is invalidated, replication sync will be skipped.
The instance '172.31.X.XXX:3318' was removed from the replicaset.

Add the failed node back to the replicaSet once it becomes active

 MySQL  172.31.X.XXX:3308 ssl  JS > rs.addInstance('genexrepladmin@172.31.X.XXX:3318')
Adding instance to the replicaset...

* Performing validation checks

This instance reports its own address as 172.31.X.XXX:3318
172.31.X.XXX:3318: Instance configuration is suitable.

* Checking async replication topology...

* Checking transaction state of the instance...
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of '172.31.X.XXX:3318' with a physical snapshot from an existing replicaset member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

WARNING: It should be safe to rely on replication to incrementally recover the state of the new instance if you are sure all updates ever executed in the replicaset were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the replicaset or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.

Incremental state recovery was selected because it seems to be safely usable.

* Updating topology
** Changing replication source of 172.31.X.XXX:3318 to 172.31.X.XXX:3308
** Waiting for new instance to synchronize with PRIMARY...
** Transactions replicated  ############################################################  100%

The instance '172.31.X.XXX:3318' was added to the replicaset and is replicating from 172.31.X.XXX:3308.

* Waiting for instance '172.31.X.XXX:3318' to synchronize the Metadata updates with the PRIMARY...
** Transactions replicated  ############################################################  100%

 MySQL  172.31.X.XXX:3308 ssl  JS >

Drawbacks or Limitations

More details on limitations can be found in the MySQL documentation.

  • InnoDB ReplicaSet is not recommended for high availability; it is advised to use InnoDB Cluster whenever possible.
  • A ReplicaSet consists of a single primary and multiple replicas.
  • Manual failover is required in case of a failure event.
  • Replication filters are not supported.
  • Partial data loss can occur for incomplete transactions during unexpected halts.
  • There is no multi-primary mode like InnoDB Cluster.
  • Only MySQL 8.0 instances with GTID-based row-based replication can be converted to or deployed as a ReplicaSet.
  • Multi-channel replication and unmanaged replication channels are not supported.
  • MySQL Shell must be used for managing the ReplicaSet, including creating and managing the replication user.

Conclusion

There are some significant challenges with an HA setup and automatic failover with ReplicaSet, so it is not on par with InnoDB Cluster. However, it still helps in environments where InnoDB Cluster cannot be deployed, such as across different data centers.

With the clone plugin, Oracle MySQL’s ReplicaSet solution gives an end-to-end answer to the classic, time-consuming activity of manually taking a backup, restoring it, and configuring a replica. So, like I said, MySQL Shell is a DBA’s assistant: its advanced interface does it all for you, from setting up the cluster/ReplicaSet from scratch to helping you manage it. ReplicaSet also has some good benefits, as below.

  • MySQL Shell / AdminAPI makes it fairly simple to configure a ReplicaSet, or to adopt one from an existing GTID-based replication environment.
  • One-liner instance configurator – dba.configureReplicaSetInstance()
    • NOTE: Saves time over manual configuration and validation, and even handles the DB restart needed to make config changes persist.
  • We can add any number of nodes using a one-liner again – rs.addInstance('genexrepladmin@172.31.X.XXX:3318');
    • NOTE: No more backup and restore, recording GTID positions, recording binlog positions, etc. MySQL CLONE rocks!
  • InnoDB ReplicaSets have better write performance, as per the documentation.
    • NOTE: We have not compared its performance ourselves; however, we have a use case to do so and will update you all in the coming weeks.
  • InnoDB ReplicaSets can be deployed on slower networks where InnoDB Cluster cannot be.
  • Finally, it has to be mentioned that, like InnoDB Cluster, it is compatible with MySQL Router, giving applications a common endpoint to connect to.
  • If you just want to migrate from binlog-based replication to GTID-based replication in 8.0, that alone can be done through MySQL Shell using just the dba.configureReplicaSetInstance() utility; the rest of the ReplicaSet creation steps can be skipped.

Hope you find this information useful. Keep reading and follow us for more exclusive content for open source databases.
