MySQL

Updating Percona XtraDB Cluster from 5.6.24-72.2 to 5.6.32-25.17

MySQL Performance Blog - Mon, 2016-11-07 19:35

This blog describes how to upgrade Percona XtraDB Cluster in place from 5.6.24-72.2 to 5.6.32-25.17.

This very hands-on blog post is the result of questions such as “Can I perform an in-place upgrade of Percona XtraDB Cluster?” coming in. We have done these minor upgrades for Percona Managed Services customers running Percona XtraDB Cluster with lots of nodes, and it’s feasible to do smoothly, as long as you pay special attention to the specific points I’ll call out. The main concern is that if you have a big dataset, you should avoid SST (which consumes a lot of time if a node rebuild is needed).

Make sure you have all the steps clear before you start, to avoid spending too much time with packages half-updated. The crucial point is the size of Galera’s GCache. If you’re executing this while the rest of the cluster stays online and writes cannot be avoided, first check whether the current GCache configuration is large enough to keep nodes from falling back to SST while you shut down Percona Server on each node, update the packages, and bring Percona Server back online again.
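
If the GCache turns out to be too small for your maintenance window, it can be enlarged through wsrep_provider_options in my.cnf. The snippet below is only a sketch: the 2G value is an illustrative placeholder (size it for your own write rate), and changing gcache.size requires restarting mysqld to take effect:

#: /etc/my.cnf on each node - example value only, not a recommendation
[mysqld]
wsrep_provider_options = "gcache.size=2G"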

A blog post written by Miguel Angel Nieto provides instructions on how to check the GCache file’s size and make sure it covers all the transactions generated during the time the node will be out. After you increase the size of the GCache, if the rejoining node finds all the missing transactions in the donor’s GCache, it uses IST. If not, it has to fall back to SST.
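
A rough way to estimate whether the GCache will cover your maintenance window (this is a sketch of that approach, with illustrative numbers) is to sample the write volume on the donor-to-be over a known interval and multiply it by the time the node will be out:

#: sample the replication byte counters twice, e.g. 60 seconds apart
mysql -e "show global status like 'wsrep_replicated_bytes'; show global status like 'wsrep_received_bytes';"
sleep 60
mysql -e "show global status like 'wsrep_replicated_bytes'; show global status like 'wsrep_received_bytes';"
#: bytes going into the GCache per second ~ (delta(wsrep_replicated_bytes) + delta(wsrep_received_bytes)) / 60
#: required gcache.size ~ that rate * the number of seconds the node will be down, plus a safety margin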

You can read more about the difference between IST and SST in the Galera API documentation.

Little less talk, little more action…

At this point, we need to update the packages one cluster node at a time; the cluster needs to stay up. I’m going to use a cluster with three nodes. Node 01 is dedicated to writes, while nodes 02 and 03 are dedicated to scaling the cluster’s reads (all are running 5.6.24-72.2). Just for reference, it’s running on CentOS 6.5 and I’m going to use yum, but you can convert the commands to any other package manager, depending on the Linux distribution you’re running. This is the list of nodes and the packages we need to update:

#: servers are like below
(writes) node01::192.168.50.11:3306, Server version: 5.6.24-72.2 Percona XtraDB Cluster (GPL)
(reads)  node02::192.168.50.12:3306, Server version: 5.6.24-72.2 Percona XtraDB Cluster (GPL)
(reads)  node03::192.168.50.13:3306, Server version: 5.6.24-72.2 Percona XtraDB Cluster (GPL)

#: packages currently installed
[vagrant@node02 ~]$ sudo rpm -qa | grep Percona
Percona-XtraDB-Cluster-client-56-5.6.24-72.2.el6.x86_64
Percona-XtraDB-Cluster-server-56-5.6.24-72.2.el6.x86_64
Percona-XtraDB-Cluster-galera-3-3.15-1.rhel6.x86_64
Percona-XtraDB-Cluster-shared-56-5.6.24-72.2.el6.x86_64
Percona-XtraDB-Cluster-devel-56-5.6.24-72.2.el6.x86_64

Before updating the packages above, make sure you update the percona-xtrabackup package if you have the variable wsrep_sst_method set to xtrabackup-v2; this avoids the error below:

WSREP_SST: [ERROR] FATAL: The innobackupex version is 2.3.4. Needs xtrabackup-2.3.5 or higher to perform SST (20161026 20:47:15.307)
2016-10-26 20:47:15 5227 [ERROR] WSREP: Failed to read 'ready <addr>' from: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '192.168.50.12' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '5227'  ''

So, on all three nodes, update percona-xtrabackup to make sure we’re running the latest version:

[root@node02 vagrant]# yum update percona-xtrabackup
Loaded plugins: fastestmirror, versionlock
Determining fastest mirrors
...
--> Running transaction check
---> Package percona-xtrabackup.x86_64 0:2.3.4-1.el6 will be updated
---> Package percona-xtrabackup.x86_64 0:2.3.5-1.el6 will be an update

With that done, take one node at a time out of the cluster, update the old binaries using yum update, and start mysqld again. You don’t need to run mysql_upgrade in this case. When you start mysqld with the newer binaries in place, the node will perform either an IST or an SST, depending on the configured GCache size.

As you’re going to take the node out of rotation and out of the cluster, you don’t need to worry about configuring it as read_only. The best scenario is to do this in a maintenance window where no one is writing data to the main node. You really want to avoid SST here: in most cases the dataset is big (TB++) and an SST can take hours (an overnight streaming operation, in my experience).

Let’s take out node02 and update the packages:

#: let's take out node02 to update packages
[vagrant@node02 ~]$ sudo /etc/init.d/mysql stop
Shutting down MySQL (Percona XtraDB Cluster).... SUCCESS!
[vagrant@node02 ~]$ sudo yum update Percona-XtraDB-Cluster-client-56-5.6.24-72.2.el6.x86_64 Percona-XtraDB-Cluster-server-56-5.6.24-72.2.el6.x86_64 Percona-XtraDB-Cluster-galera-3-3.15-1.rhel6.x86_64 Percona-XtraDB-Cluster-shared-56-5.6.24-72.2.el6.x86_64 Percona-XtraDB-Cluster-devel-56-5.6.24-72.2.el6.x86_64
...
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package Percona-XtraDB-Cluster-client-56.x86_64 1:5.6.24-72.2.el6 will be updated
---> Package Percona-XtraDB-Cluster-client-56.x86_64 1:5.6.32-25.17.1.el6 will be an update
---> Package Percona-XtraDB-Cluster-devel-56.x86_64 1:5.6.24-72.2.el6 will be updated
---> Package Percona-XtraDB-Cluster-devel-56.x86_64 1:5.6.32-25.17.1.el6 will be an update
---> Package Percona-XtraDB-Cluster-galera-3.x86_64 0:3.15-1.rhel6 will be updated
---> Package Percona-XtraDB-Cluster-galera-3.x86_64 0:3.17-1.rhel6 will be an update
---> Package Percona-XtraDB-Cluster-server-56.x86_64 1:5.6.24-72.2.el6 will be updated
---> Package Percona-XtraDB-Cluster-server-56.x86_64 1:5.6.32-25.17.1.el6 will be an update
---> Package Percona-XtraDB-Cluster-shared-56.x86_64 1:5.6.24-72.2.el6 will be updated
---> Package Percona-XtraDB-Cluster-shared-56.x86_64 1:5.6.32-25.17.1.el6 will be an update

#: new packages in place after yum update - here, make sure you run yum clean all before yum update
[root@node02 ~]# rpm -qa | grep Percona
Percona-XtraDB-Cluster-shared-56-5.6.32-25.17.1.el6.x86_64
Percona-XtraDB-Cluster-galera-3-3.17-1.rhel6.x86_64
Percona-XtraDB-Cluster-devel-56-5.6.32-25.17.1.el6.x86_64
Percona-XtraDB-Cluster-client-56-5.6.32-25.17.1.el6.x86_64
Percona-XtraDB-Cluster-server-56-5.6.32-25.17.1.el6.x86_64

Now start node02, knowing that it’s going to join the cluster, but with updated packages:

[root@node02 vagrant]# /etc/init.d/mysql start
Starting MySQL (Percona XtraDB Cluster)...State transfer in progress, setting sleep higher .. SUCCESS!

#: here you can see that the state transfer was required due to different states from cluster and current node
#: this is gonna test the wsrep_sst_method to make sure it’s working well after updating percona-xtrabackup
#: to latest version available
2016-10-26 21:51:38 3426 [Note] WSREP: State transfer required:
  Group state: 63788863-1f8c-11e6-a8cc-12f338870ac3:52613
  Local state: 63788863-1f8c-11e6-a8cc-12f338870ac3:52611
2016-10-26 21:51:38 3426 [Note] WSREP: New cluster view: global state: 63788863-1f8c-11e6-a8cc-12f338870ac3:52613, view# 2: Primary, number of nodes: 2, my index: 0, protocol version 3
2016-10-26 21:51:38 3426 [Warning] WSREP: Gap in state sequence. Need state transfer.
2016-10-26 21:51:38 3426 [Note] WSREP: Running: 'wsrep_sst_xtrabackup-v2 --role 'joiner' --address '192.168.50.12' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '3426'  '' '
WSREP_SST: [INFO] Streaming with xbstream (20161026 21:51:39.023)
WSREP_SST: [INFO] Using socat as streamer (20161026 21:51:39.025)
WSREP_SST: [INFO] Evaluating timeout -s9 100 socat -u TCP-LISTEN:4444,reuseaddr stdio | xbstream -x; RC=( ${PIPESTATUS[@]} ) (20161026 21:51:39.100)
2016-10-26 21:51:39 3426 [Note] WSREP: Prepared SST request: xtrabackup-v2|192.168.50.12:4444/xtrabackup_sst//1
...
2016-10-26 21:51:39 3426 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 52613)
2016-10-26 21:51:39 3426 [Note] WSREP: Requesting state transfer: success, donor: 1
WSREP_SST: [INFO] Proceeding with SST (20161026 21:51:39.871)
WSREP_SST: [INFO] Evaluating socat -u TCP-LISTEN:4444,reuseaddr stdio | xbstream -x; RC=( ${PIPESTATUS[@]} ) (20161026 21:51:39.873)
WSREP_SST: [INFO] Cleaning the existing datadir and innodb-data/log directories (20161026 21:51:39.876)
...
WSREP_SST: [INFO] Moving the backup to /var/lib/mysql/ (20161026 21:51:55.826)
WSREP_SST: [INFO] Evaluating innobackupex --defaults-file=/etc/my.cnf  --defaults-group=mysqld --no-version-check  --datadir=/var/lib/mysql/ --move-back --force-non-empty-directories ${DATA} &>${DATA}/innobackup.move.log (20161026 21:51:55.829)
WSREP_SST: [INFO] Move successful, removing /var/lib/mysql//.sst (20161026 21:51:55.859)
...
Version: '5.6.32-78.1-56'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Percona XtraDB Cluster (GPL), Release rel78.1, Revision 979409a, WSREP version 25.17, wsrep_25.17
2016-10-26 21:51:56 3426 [Note] WSREP: 0.0 (pxc01): State transfer from 1.0 (pxc01) complete.
2016-10-26 21:51:56 3426 [Note] WSREP: Shifting JOINER -> JOINED (TO: 52613)
2016-10-26 21:51:56 3426 [Note] WSREP: Member 0.0 (pxc01) synced with group.
2016-10-26 21:51:56 3426 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 52613)
2016-10-26 21:51:56 3426 [Note] WSREP: Synchronized with group, ready for connections
2016-10-26 21:51:56 3426 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.

As you can see above, node02 is back in the cluster. Additionally, it’s important to verify that both the Percona Server packages and the Galera packages were updated. When the node is up and part of the cluster, you should see the new provider version in the output of SHOW GLOBAL STATUS LIKE 'wsrep%':

#: node02, the one we just updated
[root@node02 mysql]# mysql -e "show global status like 'wsrep_provider_version'\G"
*************************** 1. row ***************************
Variable_name: wsrep_provider_version
        Value: 3.17(r447d194)

#: node01 not updated yet
[root@node01 mysql]# mysql -e "show global status like 'wsrep_provider_version'\G"
*************************** 1. row ***************************
Variable_name: wsrep_provider_version
        Value: 3.15(r5c765eb)

Summarizing the procedure so far, the cluster package update plan is:
  1. Take nodes out of rotation, one at a time
  2. Shut down mysqld on each node in turn
  3. Update the packages below (or the ones corresponding to your installation):

[vagrant@node02 ~]$ sudo rpm -qa | grep Percona
Percona-XtraDB-Cluster-client-56-5.6.24-72.2.el6.x86_64
Percona-XtraDB-Cluster-server-56-5.6.24-72.2.el6.x86_64
Percona-XtraDB-Cluster-galera-3-3.15-1.rhel6.x86_64
Percona-XtraDB-Cluster-shared-56-5.6.24-72.2.el6.x86_64
Percona-XtraDB-Cluster-devel-56-5.6.24-72.2.el6.x86_64

  4. Update percona-xtrabackup on all the cluster’s nodes to avoid issues (as explained above):

WSREP_SST: [ERROR] FATAL: The innobackupex version is 2.3.4. Needs xtrabackup-2.3.5 or higher to perform SST (20161026 20:47:15.307)
...
[root@node01 ~]# yum update percona-xtrabackup
...
[root@node02 ~]# xtrabackup --version
xtrabackup version 2.3.5 based on MySQL server 5.6.24 Linux (x86_64) (revision id: 45cda89)

  5. Start mysqld back up so the node rejoins the cluster and catches up to the current state (see the consolidated sketch below)
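
Putting the steps together, a minimal per-node sketch (using the same CentOS 6 init scripts and package names as above; adjust for your environment, and wait for the node to reach the Synced state before moving to the next one) could look like this:

#: run on one node at a time, after taking it out of rotation
sudo /etc/init.d/mysql stop
sudo yum clean all
sudo yum update percona-xtrabackup
sudo yum update "Percona-XtraDB-Cluster-*"
sudo /etc/init.d/mysql start
#: confirm the node rejoined and is synced before moving on
mysql -e "show global status like 'wsrep_local_state_comment'"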

After finishing each node’s package update, check from the main node that all nodes have rejoined the cluster. On node01, you can run the query below to return the main status variables; it shows node01’s current state and the cluster size:

mysql> SELECT @@HOSTNAME AS HOST, NOW() AS `DATE`, VARIABLE_NAME, VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME IN ('wsrep_cluster_state_uuid','wsrep_cluster_conf_id','wsrep_cluster_size','wsrep_cluster_status','wsrep_local_state_comment')\G
*************************** 1. row ***************************
          HOST: node01
          DATE: 2016-10-27 18:14:42
 VARIABLE_NAME: WSREP_LOCAL_STATE_COMMENT
VARIABLE_VALUE: Synced
*************************** 2. row ***************************
          HOST: node01
          DATE: 2016-10-27 18:14:42
 VARIABLE_NAME: WSREP_CLUSTER_CONF_ID
VARIABLE_VALUE: 10
*************************** 3. row ***************************
          HOST: node01
          DATE: 2016-10-27 18:14:42
 VARIABLE_NAME: WSREP_CLUSTER_SIZE
VARIABLE_VALUE: 3
*************************** 4. row ***************************
          HOST: node01
          DATE: 2016-10-27 18:14:42
 VARIABLE_NAME: WSREP_CLUSTER_STATE_UUID
VARIABLE_VALUE: 1e0b9725-9c5e-11e6-886d-7708872d6aa5
*************************** 5. row ***************************
          HOST: node01
          DATE: 2016-10-27 18:14:42
 VARIABLE_NAME: WSREP_CLUSTER_STATUS
VARIABLE_VALUE: Primary
5 rows in set (0.00 sec)

Check the other nodes as well:

#: node02
[root@node02 mysql]# mysql -e "show global status like 'wsrep_local_state%'\G"
*************************** 1. row ***************************
Variable_name: wsrep_local_state_uuid
        Value: 1e0b9725-9c5e-11e6-886d-7708872d6aa5
*************************** 2. row ***************************
Variable_name: wsrep_local_state
        Value: 4
*************************** 3. row ***************************
Variable_name: wsrep_local_state_comment
        Value: Synced

#: node03
[root@node03 ~]# mysql -e "show global status like 'wsrep_local_state%'\G"
*************************** 1. row ***************************
Variable_name: wsrep_local_state_uuid
        Value: 1e0b9725-9c5e-11e6-886d-7708872d6aa5
*************************** 2. row ***************************
Variable_name: wsrep_local_state
        Value: 4
*************************** 3. row ***************************
Variable_name: wsrep_local_state_comment
        Value: Synced

Cheers!

Categories: MySQL

My proposals for Percona Live: Window Functions and ANALYZE for statements

Sergey Petrunia's blog - Mon, 2015-11-30 17:05

I’ve made two session proposals for the Percona Live conference: one on window functions and one on ANALYZE for statements.

If you feel these talks are worth it, please vote!

Categories: MySQL