Percona Toolkit and systemd

MySQL Performance Blog - Thu, 2015-09-03 15:18

After some recent work with systemd I’ve realized its power, and I can come clean that I am a fan. I realize that there are multitudes of posts out there arguing both for and against systemd, but let’s look at some nice ways to have systemd provide us with (but not limited to) pt-kill-as-a-service.

This brief post introduces you to a systemd unit file and how we can leverage one to enable pt-kill at startup, have it start after mysqld, and ensure that MySQL is running by making the mysql service a dependency of pt-kill. By using systemd to handle this we don’t have to complicate matters by ‘monitoring the monitor’ using hacky shell scripts, cron or utilities like monit.

So then, a quick primer on systemd, because let’s face it, we’ve all been avoiding it. Systemd is not new, but it made recent headlines in the Linux world due to some of the major distros announcing their intentions to migrate upcoming releases to systemd.

What is it? Well, due to its depth it is best described as a suite of management daemons, libraries and tools that replace the traditional init scripts. So essentially, remember how you start a service, mount a volume or read the system logs… well, start forgetting all of that, because systemd is disrupting this space. With systemd come some really neat tricks for administering your machines, and I’m really only beginning to see the tip of this iceberg. There is admittedly a lot to learn with systemd, but this should serve as a pragmatic entrée.

Systemd what? When did this happen? Here is when each Linux distribution released systemd as its default:

  • Arch Linux: October 2012
  • CoreOS: October 2013 (v94.0.0)
  • Debian: April 2015 (v8 aka jessie)
  • Fedora: May 2011 (v15)
  • Gentoo Linux: N/A
  • Mageia: May 2012 (v2.0)
  • openSUSE: September 2012 (v12.2)
  • Red Hat Enterprise Linux: June 2014 (v7.0)
  • Slackware: N/A
  • SUSE Linux Enterprise Server: October 2014 (v12)
  • Ubuntu: April 2015 (v15.04)

Lennart Poettering, the name frequently attached to systemd, is seeking to modernize the most fundamental processes of the Linux startup system, bringing the paradigms of modern computing: concurrency, parallelism and efficiency. The dependency tree of processes and services is more intuitive and the structure of the underlying startup scripts is unified. I feel that the direction proposed by systemd is an evolutionary one which promotes consistency within the startup scripts, enabling conventions that can be more easily understood by a broader audience.

Systemd and Percona Toolkit

This post aims to show that we can rely on systemd to handle processes such as pt-kill, pt-stalk, and other daemonized scripts that we want running perpetually, fired at startup and reinstated after failure.

The scenario is this: I want pt-kill to drop all sleeping connections from a certain application user, let’s call it ‘caffeinebob’, because it never closes connections. Due to various reasons we can’t make changes in the application, so we’re employing the Percona Toolkit favourite, pt-kill, to do this for us. For convenience we want this behaviour to persist across server restarts. In the olden days we might have had some cron job that fires a shell script in combination with a sentinel file to ensure it’s running. I’m pretty sure that this kitty could be skinned many ways.

The systemd Unit File

After some research and testing, the below unit file plays nicely on a CentOS 7 node with systemd at its core. In this example I am running Percona Server 5.6 installed using Percona’s yum repo, with the mysql.service unit file generated at installation. I suspect that there could be some systemd deviation with other MySQL variants; however, this configuration is working for me.

[Unit]
Description = pt-kill caffeinebob
After=syslog.target mysql.service
Requires=mysql.service

[Service]
Type = simple
PIDFile = /var/run/ptkill.pid
ExecStart = /usr/bin/pt-kill --daemonize --pid=/var/run/ptkill.pid --interval=5 --defaults-file=/root/.my.cnf --log=/var/log/ptkill.log --match-user caffeinebob --busy-time 10 --kill --print
Restart=on-abort

[Install]
WantedBy=multi-user.target

Let’s examine the above and see what we’re working with. Systemd unit files come in various types. The example above is a simple Service unit file. This means we are enacting a process controlled and supervised by systemd. The significance of the After directive is that this service will not attempt startup until after syslog.target and mysql.service have been called. The Requires directive makes ptkill.service dependent on the mysql.service startup being successful.

The next part, the [Service] grouping, details the actions to be taken by the service. The Type can be one of many, but as it’s a simple call to a script I’ve used the simple type. We are describing the command and the handling of it. The ExecStart is evidently the pt-kill command that we would usually run from the shell prompt or from within a shell script. This is a very coarse example, because we can opt to parameterize the command with the assistance of an environment file. Note the use of the Restart directive, used so that systemd can react should a failure occur that interrupts the process.
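For illustration, here is a hedged sketch of that environment-file approach; the file path and variable name are hypothetical, not part of the unit file above:

# /etc/sysconfig/ptkill (hypothetical environment file)
PTKILL_OPTS="--match-user caffeinebob --busy-time 10 --kill --print"

# In ptkill.service, the [Service] section could then reference it:
[Service]
EnvironmentFile=/etc/sysconfig/ptkill
ExecStart=/usr/bin/pt-kill --daemonize --pid=/var/run/ptkill.pid --interval=5 --defaults-file=/root/.my.cnf --log=/var/log/ptkill.log $PTKILL_OPTS

Because $PTKILL_OPTS is unquoted, systemd splits it on whitespace and passes the options to pt-kill as separate arguments.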

Finally, under the [Install] grouping we’re telling systemd that this service should start up on a multi-user system, which can be thought of as runlevel 2 or 3 (multi-user mode).

So provided that we’ve got all the relevant paths, users and dependencies in place, once you reboot your host, mysql.service will initiate mysqld and, when that dependency is met, systemd will initiate pt-kill with our desired parameters to cull connections that meet the criteria stipulated in our configuration. This means you rely on systemd to manage pt-kill for you, and you don’t necessarily need to remember to start this or similar processes when you restart your node.

Start up & enable

Now, to invoke our service manually and enable it at startup, we run the following systemctl commands:

[moore@localhost ~]$ sudo systemctl start ptkill.service
[moore@localhost ~]$ sudo systemctl enable ptkill.service

No feedback, but no errors either, so we can check the status of the service:

[moore@localhost ~]$ sudo systemctl status ptkill -l
ptkill.service - keep pt-kill persistent across restarts
   Loaded: loaded (/etc/systemd/system/ptkill.service; enabled)
   Active: active (running) since Wed 2015-08-12 02:39:13 BST; 1h 19min ago
 Main PID: 2628 (perl)
   CGroup: /system.slice/ptkill.service
           └─2628 perl /usr/bin/pt-kill --daemonize --pid=/var/run/ptkill.pid --interval=5 --defaults-file=/root/.my.cnf --log=/var/log/ptkill.log --match-user caffeinebob --busy-time 10 --kill --print

Perfect. We can also instruct systemd to stop and/or disable the service when the application is changed and caffeinebob closes all those open connections:

[moore@localhost ~]$ sudo systemctl stop ptkill.service
[moore@localhost ~]$ sudo systemctl disable ptkill.service

Now, after successful implementation, we see that our process is running delightfully:

[moore@localhost ~]$ ps -ef | grep pt-kill
root      2547     1  0 02:37 ?        00:00:00 perl /usr/bin/pt-kill --daemonize --pid=/var/run/ptkill.pid --interval=5 --defaults-file=/root/.my.cnf --log=/var/log/ptkill.log --match-user caffeinebob --busy-time 10 --kill --print

Catch me if I fall

Let’s issue a kill signal to the process and observe its behaviour using journalctl:

[moore@localhost ~]$ sudo kill -SEGV 2547

This will write entries similar to the following into the system log:

[moore@localhost ~]$ sudo journalctl -xn -f
Aug 12 02:39:13 localhost.localdomain sudo[2624]: moore : TTY=pts/1 ; PWD=/home/moore ; USER=root ; COMMAND=/bin/kill -SEGV 2547
Aug 12 02:39:13 localhost.localdomain systemd[1]: ptkill.service: main process exited, code=killed, status=11/SEGV
Aug 12 02:39:13 localhost.localdomain systemd[1]: Unit ptkill.service entered failed state.
Aug 12 02:39:13 localhost.localdomain systemd[1]: ptkill.service holdoff time over, scheduling restart.
Aug 12 02:39:13 localhost.localdomain systemd[1]: Stopping keep pt-kill persistent across restarts...
-- Subject: Unit ptkill.service has begun shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit ptkill.service has begun shutting down.
Aug 12 02:39:13 localhost.localdomain systemd[1]: Starting keep pt-kill persistent across restarts...
-- Subject: Unit ptkill.service has begun with start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit ptkill.service has begun starting up.
Aug 12 02:39:13 localhost.localdomain systemd[1]: Started keep pt-kill persistent across restarts.
-- Subject: Unit ptkill.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit ptkill.service has finished starting up.
--
-- The start-up result is done.

pt-kill flaps after the kill signal, but systemd has been instructed to restart it on failure, so we don’t see caffeinebob saturate our processlist with sleeping connections.

Another bonus with this workflow is its use within orchestration. Any standardized unit files can be propagated to your fleet of hosts with tools such as Ansible, Chef, Puppet or Saltstack, as sketched below.
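As a rough illustration, pushing the unit file out with plain shell might look like this; the host name and file locations are placeholders, and your orchestration tool of choice would wrap the same steps:

scp ptkill.service moore@db01:/tmp/
ssh moore@db01 "sudo mv /tmp/ptkill.service /etc/systemd/system/"
ssh moore@db01 "sudo systemctl daemon-reload"
ssh moore@db01 "sudo systemctl enable ptkill.service && sudo systemctl start ptkill.service"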

Closing note

I’d love to hear from the pragmatists of the systemd world to understand whether this approach can be improved, or whether there are any flaws in this example unit file that would require addressing. This is very much a new school of thought for me, and feedback is both welcome and encouraged.

Thank you for your time, happy systemd-ing.


Percona Server 5.5.45-37.4 is now available

MySQL Performance Blog - Wed, 2015-09-02 14:36


Percona is glad to announce the release of Percona Server 5.5.45-37.4 on September 2, 2015. Based on MySQL 5.5.45, including all the bug fixes in it, Percona Server 5.5.45-37.4 is now the current stable release in the 5.5 series.

Percona Server is open source and free. Details of the release can be found in the 5.5.45-37.4 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed:

  • Querying INFORMATION_SCHEMA GLOBAL_TEMPORARY_TABLES table would crash threads working with internal temporary tables used by ALTER TABLE. Bug fixed #1113388.
  • FLUSH INDEX_STATISTICS/FLUSH CHANGED_PAGE_BITMAPS and FLUSH USER_STATISTICS/RESET CHANGE_PAGE_BITMAPS pairs of commands were inadvertently joined, i.e. issuing either command had the effect of both. The first pair, besides flushing both index statistics and changed page bitmaps, had the effect of FLUSH INDEX_STATISTICS requiring SUPER instead of RELOAD privilege. The second pair resulted in FLUSH USER_STATISTICS destroying changed page bitmaps. Bug fixed #1472251.
  • If a new connection thread was created while a SHOW PROCESSLIST command or an INFORMATION_SCHEMA.PROCESSLIST query was in progress, it could have a negative TIME_MS value returned in the PROCESSLIST output. Bug fixed #1379582.

Other bugs fixed: #768038 and #1472256.

Release notes for Percona Server 5.5.45-37.4 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.


Booking.com’s Jean-François Gagné on Percona Live Amsterdam

MySQL Performance Blog - Tue, 2015-09-01 10:00

Booking.com, one of the world’s leading e-commerce companies, helps travelers book nearly 1 million rooms per night. Established in 1996, Booking.com B.V. guarantees the best prices for any type of property, from small, family-run bed and breakfasts to executive apartments and five-star luxury suites.

The travel website is also a dedicated contributor to the MySQL and Perl communities. Other open source technologies in use there include CentOS Linux, Nginx, Python, Puppet, Git and more.

Booking.com is a Diamond sponsor of Percona Live Amsterdam (Sept. 21-23), where you can meet the people who power it at booth 205. Enter promo code “BlogInterview” at registration to save €20!

In the meantime, meet Jean-François Gagné, a system engineer at Booking.com. He’ll be presenting a couple of talks: “Riding the Binlog: an in Deep Dissection of the Replication Stream” and “Binlog Servers at Booking.com.”

Tom: Hi Jean-François, in your session, “Riding the Binlog: an in Deep Dissection of the Replication Stream“, you talk about how we can think of the binary logs as a transport for a “Stream of Transactions”. What will be the top 3 things attendees will come away with following this 50-minute talk?

Jean-François: Hi Tom, thanks for this opportunity to give a sneak peek of my talk.  The most important subject that will be discussed is that the binary log stream evolves: through the use of “log-slave-updates”, the stream can grow, shrink or morph.  Said another way: the binary logs of a slave can be very different from the binary logs of the master, and this should be taken into account when relying on them (including when replicating via an intermediate master and when promoting a slave to be the new master using GTIDs).  We will also explore how the binary logs can be decomposed into sub-streams, or viewed as the multiplexing of many streams.  We will also look at de-multiplexing functions and the new possibilities that are opened up by that.

 

Tom: Percona Live, starting with this conference, has a new venue and a broader theme – now encompassing, in addition to MySQL, MongoDB, NoSQL and data in the cloud. Your thoughts? And what do you think is missing – what would you change (if anything)?

Jean-François: I think you forgot the best of all the changes: going from a two-day conference last year in London to a three-day conference this year.  This will allow better knowledge exchange and I am very happy about that.  I think this event will be a success, with a good balance of sessions focused on technologies and presentations about specific use-cases of those technologies.  If I had one wish: I would like to see more sessions about specific use-cases of NoSQL technologies, with in-depth discussion about why they are a better choice than more traditional solutions: maybe more of those sessions will be submitted next year.

 

Tom: Which other session(s) are you most looking forward to besides your own?

Jean-François: I will definitely attend the Facebook session about Semi-Synchronous Replication: it is very close to my interests, especially as Booking.com is thinking about using lossless semi-sync replication in the future, and I look forward to hearing war stories about this feature.  All sessions dissecting the internals of a technology (InnoDB, TokuDB, RocksDB, …) will also have my attention.  Finally, it is always interesting to hear how large companies are using databases, so I plan to attend the MySQL@Wikimedia session.

 

Tom: As a resident of Amsterdam, what are some of the must-do activities/sightseeing for those visiting for Percona Live from out of town?

Jean-François: Seeing the city from a high point is impressive, and you will have the opportunity to enjoy that view from the Booking.com office at the Community Dinner.  Also, I recommend finding a bike and discovering the city pedaling (there are many rental shops, just ask Google).  From the conference venue, you can do a 70-minute ride crossing three nice parks: the Westerpark, the Rembrandtpark and the Vondelpark – https://goo.gl/P13Mc7 – and you can discover the first or third park in a shorter ride (45 minutes).  If you feel a little more adventurous, I recommend a 90-minute ride south following the Amstel: once out of Amsterdam, you will have the water on one side at the level of the road, and the fields (polders) 3 meters below on the other side (https://goo.gl/OPDv5z).  This will let you see for yourself why this place is called the “Low Countries”.


High-load clusters and desynchronized nodes on Percona XtraDB Cluster

MySQL Performance Blog - Mon, 2015-08-31 10:00

There can be a lot of confusion and lack of planning in Percona XtraDB Clusters with regard to nodes becoming desynchronized for various reasons.  This can happen in a few ways:

When I say “desynchronized” I mean a node that is permitted to build up a potentially large wsrep_local_recv_queue while some operation is happening.  For example a node taking a backup would set wsrep_desync=ON during the backup and potentially fall behind replication some amount.
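You can check where a node stands at any moment by polling the relevant status counters directly; a minimal sketch (the node3> prompt follows the convention used in the steps below):

node3> SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';
node3> SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue';
node3> SHOW GLOBAL VARIABLES LIKE 'wsrep_desync';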

Some of these operations may completely block Galera from applying transactions, while others may simply increase load on the server enough that it falls behind and applies at a reduced rate.

In all the cases above, flow control is NOT used while the node cannot apply transactions, but it MAY be used while the node is recovering from the operation.  For an example of this, see my last blog about IST.

If a cluster is fairly busy, then the flow control that CAN happen when the above operations catch up MAY be detrimental to performance.

Example setup

Let us take my typical 3 node cluster with workload on node1.  We are taking a blocking backup of some kind on node3 so we are executing the following steps:

  1. node3> set global wsrep_desync=ON;
  2. Node3’s “backup” starts, this starts with FLUSH TABLES WITH READ LOCK;
  3. Galera is paused on node3 and the wsrep_local_recv_queue grows some amount
  4. Node3’s “backup” finishes, finishing with UNLOCK TABLES;
  5. node3> set global wsrep_desync=OFF;
During the backup

This includes up through step 3 above.  My node1 is unaffected by the backup on node3; I can see it averaging 5-6k writesets (transactions) per second, which it did before we began:

 

node2 is also unaffected:

but node3 is not applying and its queue is building up:

Unlock tables, still wsrep_desync=ON

Let’s examine briefly what happens when node3 is permitted to start applying, but wsrep_desync stays enabled:

node1’s performance is pretty much the same, node3 is not using flow control yet. However, there is a problem:

It’s hard to notice, but node3 is NOT catching up, instead it is falling further behind!  We have potentially created a situation where node3 may never catch up.

The PXC nodes were close enough to the red-line of performance that node3 can only apply just about as fast (and somewhat slower until it heats up a bit) as new transactions are coming into node1.

This represents a serious concern in PXC capacity planning:

Nodes do not only need to be fast enough to handle normal workload, but also to catch up after maintenance operations or failures cause them to fall behind.

Experienced MySQL DBAs will realize this isn’t all that different from master/slave replication.

Flow Control as a way to recovery

So here’s the trick:  if we turn off wsrep_desync on node3 now, node3 will use flow control if and only if the incoming replication exceeds node3’s apply rate.  This gives node3 a good chance of catching up, but the tradeoff is reducing write throughput of the cluster.  Let’s see what this looks like in context with all of our steps.  wsrep_desync is turned off at the peak of the replication queue size on node3, around 12:20PM:

So at the moment node3 starts utilizing flow control to prevent falling further behind, our write throughput (in this specific environment and workload) is reduced by approximately 1/3rd (YMMV).   The cluster will remain in this state until node3 catches up and returns to the ‘Synced’ state.  This catchup is still happening as I write this post, almost 4 hours after it started and will likely take another hour or two to complete.
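If you are not watching graphs, the flow control activity can also be checked directly from SQL; a minimal sketch using standard Galera status counters (wsrep_flow_control_paused is the fraction of time spent paused since the counters were last reset):

node1> SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused';
node1> SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_sent';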

I can see a more realtime representation of this by using myq_status on node1, summarizing every minute:

[root@node1 ~]# myq_status -i 1m wsrep
mycluster / node1 (idx: 1) / Galera 3.11(ra0189ab)
         Cluster  Node     Outbound      Inbound       FlowC       Conflct Gcache   Appl
    time P cnf  # stat laten msgs data que msgs data que pause  snt lcf bfa   ist  idx %ef
19:58:47 P   5  3 Sync 0.9ms 3128 2.0M   0   27 213b   0 25.4s    0   0   0 3003k 16k 62%
19:59:47 P   5  3 Sync 1.1ms 3200 2.1M   0   31 248b   0 18.8s    0   0   0 3003k 16k 62%
20:00:47 P   5  3 Sync 0.9ms 3378 2.2M  32   27 217b   0 26.0s    0   0   0 3003k 16k 62%
20:01:47 P   5  3 Sync 0.9ms 3662 2.4M  32   33 266b   0 18.9s    0   0   0 3003k 16k 62%
20:02:47 P   5  3 Sync 0.9ms 3340 2.2M  32   27 215b   0 27.2s    0   0   0 3003k 16k 62%
20:03:47 P   5  3 Sync 0.9ms 3193 2.1M   0   27 215b   0 25.6s    0   0   0 3003k 16k 62%
20:04:47 P   5  3 Sync 0.9ms 3009 1.9M  12   28 224b   0 22.8s    0   0   0 3003k 16k 62%
20:05:47 P   5  3 Sync 0.9ms 3437 2.2M   0   27 218b   0 23.9s    0   0   0 3003k 16k 62%
20:06:47 P   5  3 Sync 0.9ms 3319 2.1M   7   28 220b   0 24.2s    0   0   0 3003k 16k 62%
20:07:47 P   5  3 Sync 1.0ms 3388 2.2M  16   31 251b   0 22.6s    0   0   0 3003k 16k 62%
20:08:47 P   5  3 Sync 1.1ms 3695 2.4M  19   39 312b   0 13.9s    0   0   0 3003k 16k 62%
20:09:47 P   5  3 Sync 0.9ms 3293 2.1M   0   26 211b   0 26.2s    0   0   0 3003k 16k 62%

This reports around 20-25 seconds of flow control every minute, which is consistent with that ~1/3rd of performance reduction we see in the graphs above.

Watching node3 the same way proves it is sending the flow control (FlowC snt):

mycluster / node3 (idx: 2) / Galera 3.11(ra0189ab)
         Cluster  Node     Outbound      Inbound       FlowC       Conflct Gcache   Appl
    time P cnf  # stat laten msgs data que msgs data que pause  snt lcf bfa   ist  idx %ef
17:38:09 P   5  3 Dono 0.8ms    0   0b   0 4434 2.8M 16m 25.2s   31   0   0 18634 16k 80%
17:39:09 P   5  3 Dono 1.3ms    0   0b   1 5040 3.2M 16m 22.1s   29   0   0 37497 16k 80%
17:40:09 P   5  3 Dono 1.4ms    0   0b   0 4506 2.9M 16m 21.0s   31   0   0 16674 16k 80%
17:41:09 P   5  3 Dono 0.9ms    0   0b   0 5274 3.4M 16m 16.4s   27   0   0 22134 16k 80%
17:42:09 P   5  3 Dono 0.9ms    0   0b   0 4826 3.1M 16m 19.8s   26   0   0 16386 16k 80%
17:43:09 P   5  3 Jned 0.9ms    0   0b   0 4957 3.2M 16m 18.7s   28   0   0 83677 16k 80%
17:44:09 P   5  3 Jned 0.9ms    0   0b   0 3693 2.4M 16m 27.2s   30   0   0  131k 16k 80%
17:45:09 P   5  3 Jned 0.9ms    0   0b   0 4151 2.7M 16m 26.3s   34   0   0  185k 16k 80%
17:46:09 P   5  3 Jned 1.5ms    0   0b   0 4420 2.8M 16m 25.0s   30   0   0  245k 16k 80%
17:47:09 P   5  3 Jned 1.3ms    0   0b   1 4806 3.1M 16m 21.0s   27   0   0  310k 16k 80%

There are a lot of flow control messages (around 30) per minute.  This is a lot of ON/OFF toggles of flow control where writes are briefly delayed rather than a steady “you can’t write” for 20 seconds straight.

It also interestingly spends a long time in the Donor/Desynced state (even though wsrep_desync was turned OFF hours before) and then moves to the Joined state (this has the same meaning as during an IST).

Does it matter?

As always, it depends.

If these are web requests and suddenly the database can only handle ~66% of the traffic, that’s likely a problem, but maybe it just slows down the website somewhat.  I want to emphasize that WRITES are what is affected here.  Reads on any and all nodes should be normal (though you probably don’t want to read from node3 since it is so far behind).

If this were some queue processing that had reduced throughput, I’d expect it to simply catch up later.

This can only be answered for your application, but the takeaways for me are:

  • Don’t underestimate your capacity requirements
  • Being at the redline normally means you are well past the redline for abnormal events.
  • Plan for maintenance and failure recoveries
  • Where possible, build queuing into your workflows so diminished throughput in your architecture doesn’t generate failures.

Happy clustering!

Graphs in this post courtesy of VividCortex.


Percona Toolkit 2.2.15 is now available

MySQL Performance Blog - Fri, 2015-08-28 17:30

Percona is pleased to announce the availability of Percona Toolkit 2.2.15.  Released August 28, 2015. Percona Toolkit is a collection of advanced command-line tools to perform a variety of MySQL server and system tasks that are too difficult or complex for DBAs to perform manually. Percona Toolkit, like all Percona software, is free and open source.

This release is the current GA (Generally Available) stable release in the 2.2 series. It includes multiple bug fixes as well as continued preparation for MySQL 5.7 compatibility. Full details are below. Downloads are available here and from the Percona Software Repositories.

New Features:

  • Added --max-flow-ctl option to pt-online-schema-change and pt-archiver with a value set in percent. When a Percona XtraDB Cluster node is very loaded, it sends flow control signals to the other nodes to stop sending transactions in order to catch up. When the average value of time spent in this state (in percent) exceeds the maximum provided in the option, the tool pauses until it falls below again. The default is no flow control checking. A usage sketch follows this list.
  • Added the --sleep option for pt-online-schema-change to avoid performance problems. The option accepts float values in seconds.
  • Implemented ability to specify --check-slave-lag multiple times for pt-archiver. The following example enables lag checks for two slaves:
    pt-archiver --no-delete --where '1=1' --source h=oltp_server,D=test,t=tbl --dest h=olap_server --check-slave-lag h=slave1 --check-slave-lag h=slave2 --limit 1000 --commit-each
  • Added the --rds option to pt-kill, which makes the tool use Amazon RDS procedure calls instead of the standard MySQL kill command.
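As promised above, here is a hedged sketch of how --max-flow-ctl might be used with pt-online-schema-change; the schema, table and ALTER clause are placeholders, and 25 is an arbitrary flow-control ceiling in percent:

pt-online-schema-change --alter "ADD COLUMN notes VARCHAR(255)" --max-flow-ctl 25 --execute D=appdb,t=orders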

Bugs Fixed:

  • Fixed bug 1042727: pt-table-checksum doesn’t reconnect the slave $dbh
    Before, the tool would die if any slave connection was lost. Now the tool waits forever for slaves.
  • Fixed bug 1056507: pt-archiver --check-slave-lag agressiveness
    The tool now checks replication lag every 100 rows instead of every row, which significantly improves efficiency.
  • Fixed bug 1215587: Adding underscores to constraints when using pt-online-schema-change can create issues with constraint name length
    Before, multiple schema changes lead to underscores stacking up on the name of the constraint until it reached the 64 character limit. Now there is a limit of two underscores in the prefix, then the tool alternately removes or adds one underscore, attempting to make the name unique.
  • Fixed bug 1277049: pt-online-schema-change can’t connect with comma in password
    For all tools, documented that commas in passwords provided on the command line must be escaped.
  • Fixed bug 1441928: Unlimited chunk size when using pt-online-schema-change with --chunk-size-limit=0 inhibits checksumming of single-nibble tables
    When comparing table size with the slave table, the tool now ignores --chunk-size-limit if it is set to zero to avoid multiplying by zero.
  • Fixed bug 1443763: Update documentation and/or implentation of pt-archiver --check-interval
    Fixed the documentation for --check-interval to reflect its correct behavior.
  • Fixed bug 1449226: pt-archiver dies with “MySQL server has gone away” when --innodb_kill_idle_transaction is set to a low value and --check-slave-lag is enabled
    The tool now sends a dummy SQL query to avoid timing out.
  • Fixed bug 1446928: pt-online-schema-change not reporting meaningful errors
    The tool now produces meaningful errors based on text from MySQL errors.
  • Fixed bug 1450499: ReadKeyMini causes pt-online-schema-change session to lock under some circumstances
    Removed ReadKeyMini, because it is no longer necessary.
  • Fixed bug 1452914: --purge and --no-delete are mutually exclusive, but still allowed to be specified together by pt-archiver
    The tool now issues an error when --purge and --no-delete are specified together.
  • Fixed bug 1455486: pt-mysql-summary is missing the --ask-pass option
    Added the --ask-pass option to the tool.
  • Fixed bug 1457573: pt-mysql-summary fails to download pt-diskstats pt-pmp pt-mext pt-align
    Added the -L option to curl and changed download address to use HTTPS.
  • Fixed bug 1462904: pt-duplicate-key-checker doesn’t support triple quote in column name
    Updated TableParser module to handle literal backticks.
  • Fixed bug 1488600: pt-stalk doesn’t check TokuDB status
    Implemented status collection similar to how it is performed for InnoDB.
  • Fixed bug 1488611: various testing bugs related to newer Perl versions

Details of the release can be found in the release notes and the 2.2.15 milestone on Launchpad. Bugs can be reported on the Percona Toolkit launchpad bug tracker.


ObjectRocket’s David Murphy talks about MongoDB, Percona Live Amsterdam

MySQL Performance Blog - Wed, 2015-08-26 20:07

Say hello to David Murphy, lead DBA and MongoDB Master at ObjectRocket (a Rackspace company). David works on sharding, tool building, very large-scale issues and high-performance MongoDB architecture. Prior to ObjectRocket he was a MySQL/NoSQL architect at Electronic Arts. David enjoys large-scale operational tool building, high performance OS and database tuning. He is also a core code contributor to MongoDB. He’ll be speaking next month at Percona Live Amsterdam, which runs Sept. 21-23. Enter promo code “BlogInterview” at registration to save €20!

Tom: David, your 3-hour tutorial is titled “Mongo Sharding from the trench: A Veterans field guide.” Did your experience working with vast amounts of data at Rackspace give you a unique perspective that now puts you in a position to help people just getting started? Can you give a couple of examples?

David: I think this has been something I grew into organically, from the days of supporting cPanel-type MySQL instances to today. I have worked in a few verticals, from hosting to advertising to gaming, before finally entering the platform service space. Those roles gave me a host of knowledge about how customers need systems to work, and the number and range of workloads we see at Rackspace reinforces this.

ObjectRocket’s David Murphy

Many times the unique perspective comes with the scale, such as someone scaling a single node up to the multi-terabyte range. When they go to “shard” they can find that the process, which is normally very light and unnoticeable in most Mongo sharding, can severely lock the metadata for an extended time. In other cases, the “balancer” might not be able to keep up with the amount of work being asked of it.

Toward the smaller end of the spectrum, having seen so many workloads from big to small, I can see similar thought processes and trends. Having worked with so many of these workloads, and honestly having learned along the evolution of Mongo, helps me explain to clients the good, the bad and the hairy. Many times discussions come down to people not using connection pooling, using non-indexed sorting, or using complex operators such as $in, $nin and more. In these cases, I can talk to people about the balance of using these concepts and when they will become bigger issues for them. My goal is to give them enough knowledge to help determine when it is correct to use development resources to fix an issue, and when it’s manageable and that development time could be better spent elsewhere.

 

Tom: The title of your tutorial also sounds like the perfect title of a book. Do you have any plans for one?

David: What an excellent question! I have thought about this; the goal would be to write one if I can find the time to do it. A working title might be “Mongo from the trenches: Surviving the minefield to get ahead”. I think the book might be broken into three sections: “When should you use or not use Mongo”, “Schema and operators in the NoSQL world”, and “Sharding”. I would do this as it could be a great mini book on its own; the community really could use a level of depth similar to the MySQL 5.0 certification guides. I liked those books as they helped someone understand all the bits of what to consider with your schema design and how it affects the application as much as the database hosts. Then the second half, more administration-geared, took those same schema and design choices and helped you manage them with confidence.

In the end, Mongo is a good product that works well for most people, but as it matures we need more discussion on topics such as what you should monitor, how you should predict issues, and how valuable regular audits are. Especially in an ecosystem where it’s easy to spin something up, launch it, and move on to the next project.

 

Tom: When and why would you recommend using MongoDB instead of MySQL?

David: I am glad I mentioned this is worthy of a book already, as it is such a complex topic and one that gets me very excited.

I feel there is a bit of misinformation on both sides of this field. Many experts in the MySQL camp know that when someone says they can’t get more than 1000 TPS via MySQL, 9 out of 10 times it is a design issue, not a technology issue. The Mongo crowd loves this, and due to the inherent sharding nature of Mongo they can sidestep these types of issues. Conversely, in the Mongo camp you will hear how bad the SQL standard is; however, omitting transactions for a moment, the same types of operations exist in both MySQL and Mongo.  There are some interesting powers in the Mongo aggregation framework. However, SQL is more powerful, and just as complex as some map-reduce jobs and aggregations I have written.

As to your question, MySQL will always win in regards to repeatable reads to the database in a transaction. There is some talk of limited transactions in Mongo; however, these will likely not become global and cluster-wide anytime soon, if ever.  I don’t trust floats in Mongo for financials; it’s not that Mongo doesn’t do them, but rather that JavaScript-type floats are what is present. Sometimes you need to store data as a 64-bit integer and do math in the app to make it a high-precision float. MySQL, on the other hand, has excellent support for precision.

Another area is simply looking at the history of Mongo and MySQL.  Mongo, until WiredTiger and RocksDB, was very similar to MyISAM from a locking behavior and support perspective. With the advent of the new storage engines, we will see major leaps forward in the types of flows you will want in Mongo. The writer lock issue is gone, and locking between the systems is becoming more and more similar, making the decision much harder.

The news is not all one-sided, however. Subdocument and array support in Mongo is amazing: there are so many things I can do in Mongo that I could not do even with bitwise SET/ENUM operators. So if you need that type of system, or you want to create a semi-denormalized form of a view in the database, Mongo can do this with ease and on the fly. MySQL, on the other hand, would take careful planning and need whole tables updated.  In this regard I feel more people could use Mongo and its ability to have a versioned document schema, allowing more incremental changes to documents: with new code releases the application can read old versions and “upgrade” them to the latest form, removing a whole flurry of maintenance-related pains that RDBMSs have, to the frustration of developers who just want to launch the new product.

The last thing I would want to say here is that you need not choose: why not use both? Mongo can be very powerful for keeping a semi-denormalized version of the data that is nimble enough to allow fast application or system updates and features, leaving MySQL for the very specific workloads that need precision, are simple, and are not expected to have schema changes.  I am a huge fan of keeping the transactional portions in MySQL and the rest in Mongo, allowing you to scale the bulk of your data needs up and down quickly, and more slowly change the parts that need to be 100% consistent all of the time with no room for eventual consistency.

 

Tom: Which other session(s) are you most looking forward to besides your own at Percona Live Amsterdam?

David: There are a few that are near and dear to me.

“Turtles all the way down: tuning Linux for database workloads” looks like a great one. It reflects a view I have always had: DBAs should be DBAs, sysadmins and storage people rolled into one. That way they can understand the impacts of the application down to the blocks the database reads.

“TokuDB internals” is another one. I have used TokuDB in MySQL and Mongo to some degree, but it has never had in-depth documentation. A topic like that is a great way to fill any gaps for experienced and new people alike.

Database Reliability Engineering” looks like a great talk from a great speaker.

As an InnoDB geek, I like the idea around “Understanding InnoDB locks: case studies.”

I see a huge amount of potential for MaxScale; if anyone else is curious, “Anatomy of a Proxy Server: MaxScale Internals” should be good for R/W splits and split-writing type cases.

Finally, one of my favorite people is Charity, as she is always so energetic and can get to the heart of the matter. If you are not going to “Upgrade your database: without losing your data, your perf or your mind” you are missing out!

 

Tom: Thanks for speaking with me, David! Is there anything else you’d like to add: either about Rackspace or Percona Live Amsterdam?

David: In regards to Rackspace, I urge everyone to check out the Data Services group.  We handle everything from Redis to Hadoop, with a goal of augmenting your teams or providing experts to help keep your uptime as high as possible. With options ranging from dedicated hosts to platform-type services, there is something that helps everyone. Rackspace is not just a cloud company but a real support company that provides amazing hardware to use, or support for hardware hosted elsewhere, and it is growing rapidly.

As for Percona Live Amsterdam, everyone should come: the group of speakers is simply amazing, and I for one am excited by so many topics because they are all so compelling. Outside of that, you will find it hard to find another gathering of database experts with multiple technologies under their belts who truly believe in picking the right technology for the right use case.


Advanced Query Tuning in MySQL 5.6 and MySQL 5.7 Webinar: Q&A

MySQL Performance Blog - Mon, 2015-08-24 14:16

Thank you for attending my July 22 webinar titled “Advanced Query Tuning in MySQL 5.6 and 5.7” (my slides and a replay available here). As promised here is the list of questions and my answers (thank you for your great questions).

Q: Here is the explain example:

mysql> explain extended select id, site_id from test_index_id where site_id=1
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: test_index_id
         type: ref
possible_keys: key_site_id
          key: key_site_id
      key_len: 5
          ref: const
         rows: 1
     filtered: 100.00
        Extra: Using where; Using index

why is site_id a covered index for the query, given the fact that a) we are selecting “id”, b) key_site_id only contains site_id?

As the table is InnoDB, all secondary keys always contain the primary key (“id”); in this case the secondary index therefore contains all the information needed to satisfy the above query, and key_site_id acts as a “covering index”.

Q: Applications change over time. Do you suggest doing a periodic analysis of indexes that are being used and dropping the ones that are not? If yes, any suggestions on how to tackle that?

Yes, that is a good idea. Usually it can be done easily with Percona Toolkit or Performance Schema in MySQL 5.6:

  1. Enable the slow query log and log every query, then use the pt-index-usage tool
  2. Or use the following query (as suggested by FromDual blog post):

SELECT object_schema, object_name, index_name
FROM performance_schema.table_io_waits_summary_by_index_usage
WHERE index_name IS NOT NULL
  AND count_star = 0
ORDER BY object_schema, object_name;

Q: If a duplicate index is found on 5.6/5.7, will that cause a performance impact on the DB while querying?

Duplicate keys can have a negative impact on selects (a quick way to find them is sketched after this list):

  1. MySQL can get confused and choose a wrong index
  2. Total index size can grow, which can cause MySQL to run out of RAM
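Percona Toolkit ships pt-duplicate-key-checker for finding duplicate and redundant indexes; a minimal sketch, with host and credentials as placeholders:

pt-duplicate-key-checker h=localhost,u=root --ask-pass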

Q: What is the suggested method to measure performance on queries (other than the slow query log) so as to know where to create indexes?

The slow query log is the most common method. In MySQL 5.6 you can also use Performance Schema, for example the events_statements_summary_by_digest table.

Q: I’m not sure if this was covered in the webinar but… are there any best-practices for fulltext indexes?

That was not covered in this webinar, however, I’ve done a number of presentations regarding Full Text Indexes. For example: Creating Geo Enabled Applications with MySQL 5.6

Q: What would be the limit on index size or the number of indexes you can define per table?

There are no limits on index size on disk; however, it is good (performance-wise) to have the active indexes fit in RAM.

In InnoDB there are a number of index limitations, i.e. a table can contain a maximum of 64 secondary indexes.

Q:  If a table has two columns you would like to sum, can you have that sum indexed as a calculated index? To add to that, can that calculated index have “case when”?

Just to clarify, this is only a feature of MySQL 5.7 (not released yet).

Yes, it is documented now:

CREATE TABLE triangle (
  sidea DOUBLE,
  sideb DOUBLE,
  sidec DOUBLE AS (SQRT(sidea * sidea + sideb * sideb))
);

Q: I have noticed that you created indexes on columns like DayOfTheWeek with very low cardinality. Shouldn’t that be a bad practice normally?

Yes, you are right! Unless you are doing queries like “select count(*) from … where DayOfTheWeek = 7”, those indexes may not be very useful.

Q: I saw an article that if you don’t specify a primary key upfront, MySQL/InnoDB creates one in the background (hidden). Is it different from a declared primary key, if most of the WHERE fields used are not in the primary/semi-primary key? And is there a way to identify the tables with the hidden primary key indexes?

The “hidden” primary key will be 6 bytes, which will also be appended (duplicated) to all secondary keys. You can create an INT primary key auto_increment, which will be smaller (if you do not plan to store more than 4 billion rows). In addition, you will not be able to use the hidden primary key in your queries.

The following query (against information_schema) can be used to find all tables without declared primary key (with “hidden” primary key):

SELECT tables.table_schema, tables.table_name, tables.table_rows
FROM information_schema.tables
LEFT JOIN (
    SELECT table_schema, table_name
    FROM information_schema.statistics
    GROUP BY table_schema, table_name, index_name
    HAVING SUM(
        CASE WHEN non_unique = 0 AND nullable != 'YES' THEN 1 ELSE 0 END
    ) = COUNT(*)
) puks
ON tables.table_schema = puks.table_schema AND tables.table_name = puks.table_name
WHERE puks.table_name IS NULL
  AND tables.table_type = 'BASE TABLE'
  AND engine='InnoDB'

You may also use mysql.innodb_index_stats table to find rows with the hidden primary key:

Example:

mysql> select * from mysql.innodb_index_stats;
+---------------+------------+-----------------+---------------------+--------------+------------+-------------+-----------------------------------+
| database_name | table_name | index_name      | last_update         | stat_name    | stat_value | sample_size | stat_description                  |
+---------------+------------+-----------------+---------------------+--------------+------------+-------------+-----------------------------------+
| test          | t1         | GEN_CLUST_INDEX | 2015-08-08 20:48:23 | n_diff_pfx01 | 96         | 1           | DB_ROW_ID                         |
| test          | t1         | GEN_CLUST_INDEX | 2015-08-08 20:48:23 | n_leaf_pages | 1          | NULL        | Number of leaf pages in the index |
| test          | t1         | GEN_CLUST_INDEX | 2015-08-08 20:48:23 | size         | 1          | NULL        | Number of pages in the index      |
+---------------+------------+-----------------+---------------------+--------------+------------+-------------+-----------------------------------+

Q: You are using ALTER TABLE to create the index, but how does MySQL sort the data for creating the index? Doesn’t it use a temp table for that?

That is a very good question: the behavior of the “alter table … add index” has changed over time. As documented in Overview of Online DDL:

Historically, many DDL operations on InnoDB tables were expensive. Many ALTER TABLE operations worked by creating a new, empty table defined with the requested table options and indexes, then copying the existing rows to the new table one-by-one, updating the indexes as the rows were inserted. After all rows from the original table were copied, the old table was dropped and the copy was renamed with the name of the original table.

MySQL 5.5, and MySQL 5.1 with the InnoDB Plugin, optimized CREATE INDEX and DROP INDEX to avoid the table-copying behavior. That feature was known as Fast Index Creation.

When MySQL uses “Fast Index Creation” operation it will create a set of temporary files in MySQL’s tmpdir:

To add a secondary index to an existing table, InnoDB scans the table, and sorts the rows using memory buffers and temporary files in order by the values of the secondary index key columns. The B-tree is then built in key-value order, which is more efficient than inserting rows into an index in random order.

Q: How good is InnoDB deadlock handling in 5.7 compared to 5.6? Is that based on parameter setup?

InnoDB deadlocks discussion is outside of the scope of this presentation. Valerii Kravchuk and Nilnandan Joshi did an excellent talk at Percona Live 2015 (slides available): Understanding Innodb Locks and Deadlocks

Q: What is the performance impact of generating a virtual column for a table having 66 million records and generating the index? And how would you go about it? Do you have any suggestions on how to reorganize indexes on the physical disk?

As MySQL 5.7 is not released yet, the behavior of virtual columns may change.  The main question here is whether it will be an online operation to a) add a virtual column (as this is only a metadata change it should be a very light operation anyway) and b) add an index on that virtual column. In the labs release it was not online; however, this can change.

Thank you again for attending.


Find unused indexes on MongoDB and TokuMX

MySQL Performance Blog - Fri, 2015-08-21 15:33

Finding and removing unused indexes is a pretty common technique to improve the overall performance of relational databases. Fewer indexes mean faster inserts and updates, and also less disk space used. The usual way to do it is to log all queries’ execution plans and then get a list of those indexes that are not used. The same theory applies to MongoDB and TokuMX, so in this blog post I’m going to explain how to find them.

Profiling in MongoDB

To understand what profiling is, you only need to think about MySQL’s slow query log; it is basically the same idea. It can be enabled with the following command:

db.setProfilingLevel(level, slowms)

There are three different levels:

0: No profiling enabled.
1: Only those queries slower than “slowms” are profiled.
2: All queries are profiled, similar to long_query_time=0.

Once it is enabled you can use db.system.profile.find().pretty() to read it. You would need to scan through all the profiles and find those indexes that are never used. To make things easier there is a JavaScript program that will find the unused indexes after reading all the profile information. Unfortunately, it only works with MongoDB 2.x.
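As a quick illustration, here is a hedged sketch of turning full profiling on and then inspecting what was captured from the mongo shell; the namespace "test.col" is just a placeholder:

// Profile every query in the current database (level 2)
db.setProfilingLevel(2)

// Later, look at what was captured for one collection
db.system.profile.find({ ns: "test.col" }).pretty()

// Turn profiling back off when done
db.setProfilingLevel(0)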

The JavaScript is hosted in this GitHub project: https://github.com/wfreeman/indexalizer. You just need to start the mongo shell with indexStats.js and run the db.indexStats() command. This is a sample output:

scanning profile {ns:"test.col"} with 2 records... this could take a while.
{
    "query" : { "b" : 1 },
    "count" : 1,
    "index" : "",
    "cursor" : "BtreeCursor b_1",
    "millis" : 0,
    "nscanned" : 1,
    "n" : 1,
    "scanAndOrder" : false
}
{
    "query" : { "b" : 2 },
    "count" : 1,
    "index" : "",
    "cursor" : "BtreeCursor b_1",
    "millis" : 0,
    "nscanned" : 1,
    "n" : 1,
    "scanAndOrder" : false
}
checking for unused indexes in: col
this index is not being used: "_id_"
this index is not being used: "a_1"

 

So “a_1” is not used and could be dropped. We can ignore “_id_” because that one is needed.

There is a problem with profiling: it will affect performance, so you need to run it only for some hours, usually during off-peak times. That means there is a possibility that not all of the queries your application can issue will be executed during that window. What alternative does TokuMX provide?

Finding unused indexes in TokuMX

Good news for all of us. TokuMX doesn’t require you to enable profiling. Index usage statistics are stored as part of every query execution and you can access them with a simple db.collection.stats() command. Let me show you an example:

> db.col.stats()
[...]
    {
        "name" : "a_1",
        "count" : 5,
        "size" : 140,
        "avgObjSize" : 28,
        "storageSize" : 16896,
        "pageSize" : 4194304,
        "readPageSize" : 65536,
        "fanout" : 16,
        "compression" : "zlib",
        "queries" : 0,
        "nscanned" : 0,
        "nscannedObjects" : 0,
        "inserts" : 0,
        "deletes" : 0
    },
    {
        "name" : "b_1",
        "count" : 5,
        "size" : 140,
        "avgObjSize" : 28,
        "storageSize" : 16896,
        "pageSize" : 4194304,
        "readPageSize" : 65536,
        "fanout" : 16,
        "compression" : "zlib",
        "queries" : 2,
        "nscanned" : 2,
        "nscannedObjects" : 2,
        "inserts" : 0,
        "deletes" : 0
    }
],
"ok" : 1
}

 

There are our statistics without profiling enabled. “queries” means the number of times that index has been used in a query execution. b_1 has been used twice and a_1 has never been used. You can use this small piece of JavaScript I’ve written to scan all collections inside the current database:

db.forEachCollectionName(function (cname) {
    output = db.runCommand({collstats : cname });
    print("Checking " + output.ns + "...")
    output.indexDetails.forEach(function (findUnused) {
        if (findUnused.queries == 0) {
            print( "Unused index: " + findUnused.name );
        }
    })
});

 

An example using the same data:

> db.forEachCollectionName(function (cname) {
... output = db.runCommand({collstats : cname });
... print("Checking " + output.ns + "...")
... output.indexDetails.forEach(function(findUnused) { if (findUnused.queries == 0) { print( "Unused index: " + findUnused.name ); }})
...
... });
Checking test.system.indexes...
Checking test.col...
Unused index: a_1

 

Conclusion

Finding unused indexes is a regular task that every DBA should do. In MongoDB you have to use profiling, while in TokuMX nothing needs to be enabled, because it gathers this information by default without impacting service performance.


Optimizing PXC Xtrabackup State Snapshot Transfer

MySQL Performance Blog - Thu, 2015-08-20 15:38
State Snapshot Transfer (SST) at a glance

PXC uses a protocol called State Snapshot Transfer to provision a node joining an existing cluster with all the data it needs to synchronize.  This is analogous to cloning a slave in asynchronous replication:  you take a full backup of one node and copy it to the new one, while tracking the replication position of the backup.

PXC automates this process using scriptable SST methods.  The most common of these methods is the xtrabackup-v2 method which is the default in PXC 5.6.  Xtrabackup generally is more favored over other SST methods because it is non-blocking on the Donor node (the node contributing the backup).

The basic flow of this method is:

  • The Joiner:
    • joins the cluster
    • Learns it needs a full SST and clobbers its local datadir (the SST will replace it)
    • prepares for a state transfer by opening a socat on port 4444 (by default)
    • The socat pipes the incoming files into the datadir/.sst directory
  • The Donor:
    • is picked by the cluster (could be configured or be based on WAN segments)
    • starts a streaming Xtrabackup and pipes the output of that via socat to the Joiner on port 4444.
    • Upon finishing its backup, it sends an indication of this, and the final Galera GTID of the backup, to the Joiner
  • The Joiner:
    • Records all changes from the Donor’s backup’s GTID forward in its gcache (and overflow pages, this is limited by available disk space)
    • runs the --apply-log phase of Xtrabackup on the backup received from the Donor
    • Moves the datadir/.sst directory contents into the datadir
    • Starts mysqld
    • Applies all the transactions it needs (Joining and Joined states just like IST does it)
    • Moves to the ‘Synced’ state and is done.

There are a lot of moving pieces here, and nothing is really tuned by default.  On larger clusters, SST can be quite scary because it may take hours or even days.  Any failure can mean starting over again from the start.

This blog will concentrate on some ways to make a good dent in the time SST can take.  Many of these methods are trade-offs and may not apply to your situations.  Further, there may be other ways I haven’t thought of to speed things up, please share what you’ve found that works!

The Environment

I am testing SST on a PXC 5.6.24 cluster in AWS.  The nodes are c3.4xlarge and the datadirs are RAID-0 over the two ephemeral SSD drives in that instance type.  These instances are all in the same region.

My simulated application is using only node1 in the cluster and is sysbench OLTP with 200 tables with 1M rows each.  This comes out to just under 50G of data.  The test application runs on a separate server with 32 threads.

The PXC cluster itself is tuned to best practices for InnoDB and Galera performance.

Baseline

In my first test the cluster is a single member (receiving workload) and I am  joining node2.  This configuration is untuned for SST.  I measured the time from when mysqld started on node2 until it entered the Synced state (i.e., fully caught up).  In the log, it looked like this:

150724 15:59:24 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
... lots of other output ...
2015-07-24 16:48:39 31084 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 4647341)

Doing some math on the above, we find that the SST took 51 minutes to complete.

--use-memory

One of the first things I noticed was that the --apply-log step on the Joiner was very slow.  Anyone who uses Xtrabackup a lot will know that --apply-log will be a lot faster if you give it some extra RAM to use while making the backup consistent via the --use-memory option.  We can set this in our my.cnf like this:

[sst]
inno-apply-opts="--use-memory=20G"

The [sst] section is a special one understood only by the xtrabackup-v2 script.  inno-apply-opts allows me to specify arguments to innobackupex when it runs.

Note that this change only applies to the Joiner (i.e., you don’t have to put it on all your nodes and restart them to take advantage of it).

This change immediately makes a huge improvement to our above scenario (node2 joining node1 under load) and the SST now takes just over 30 minutes.

wsrep_slave_threads

Another slow part of getting to Synced is how long it takes to apply transactions up to realtime after the backup is restored and in place on the Joiner.  We can improve this throughput by increasing the number of apply threads on the Joiner to make better use of the CPU.  Prior to this wsrep_slave_threads was set to 1, but if I increase this to 32 (there are 16 cores on this instance type) my SST now takes 25m 32s.
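A hedged sketch of making that change; wsrep_slave_threads can be set in my.cnf or adjusted at runtime, and 32 is simply the value used in this test:

# my.cnf on the Joiner
[mysqld]
wsrep_slave_threads = 32

# or, without a restart, from the shell:
mysql -e "SET GLOBAL wsrep_slave_threads = 32;"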

Compression

xtrabackup-v2 supports adding a compression process into the datastream.  On the Donor it compresses and on the Joiner it decompresses.  This allows you to trade CPU for transfer speed.  If your bottleneck turns out to be network transport and you have spare CPU, this can help a lot.

Further, I can use pigz instead of gzip to get parallel compression, but theoretically any compression utility can work as long as it can compress and decompress standard input to standard output.  I install the ‘pigz’ package on all my nodes and change my my.cnf like this:

[sst]
inno-apply-opts="--use-memory=20G"
compressor="pigz"
decompressor="pigz -d"

Both the Joiner and the Donor must have the respective decompressor and compressor settings or the SST will fail with a vague error message (not actually having pigz installed will do the same thing).

By adding compression, my SST is down to 21 minutes, but there’s a catch.  My application performance starts to take a serious nose-dive during this test.  Pigz is consuming most of the CPU on my Donor, which is also my primary application node.  This may or may not hurt your application workload in the same way, but this emphasizes the importance of understanding (and measuring) the performance impact of SST has on your Donor nodes.

Dedicated donor

To alleviate the problem with the application, I now leave node2 up and spin up node3.  Since I’m expecting node2 to normally not be receiving application traffic directly, I can configure node3 to prefer node2 as its donor like this:

[mysqld]
...
wsrep_sst_donor = node2,

When node3 starts, this setting instructs the cluster that node3 is the preferred donor, but if that’s not available, pick something else (that’s what the trailing comma means).

Donor nodes are permitted to fall behind in replication apply as needed without sending flow control.  Sending application traffic to such a node may see an increase in the amount of stale data as well as certification failures for writes (not to mention the performance issues we saw above with node1).  Since node2 is not getting application traffic, moving into the Donor state and doing an expensive SST with pigz compression should be relatively safe for the rest of the cluster (in this case, node1).

Even if you don’t have a dedicated donor, if you use a load balancer of some kind in front of your cluster, you may elect to consider Donor nodes as failing their health checks so application traffic is diverted during any state transfer.
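As an illustrative sketch of that idea (not a drop-in script – the clustercheck utility shipped with PXC does this more thoroughly), a health check can simply refuse traffic whenever the node is not Synced:

#!/bin/bash
# Illustrative health check: report healthy only when the node is Synced.
# wsrep_local_state: 4 = Synced, 2 = Donor/Desynced, 1 = Joining, 3 = Joined.
STATE=$(mysql -N -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state'" | awk '{print $2}')
if [ "$STATE" = "4" ]; then
    exit 0  # keep the node in the load balancer pool
else
    exit 1  # Donor/Joiner/etc. -- divert application traffic elsewhere
fi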

When I brought up node3, with node2 as the donor, the SST time dropped to 18m 33s.

Conclusion

Each of these tunings helped the SST speed, though the later adjustments maybe had less of a direct impact.  Depending on your workload, database size, network and CPU available, your mileage may of course vary.  Your tunings should vary accordingly, but also realize you may actually want to limit (and not increase) the speed of state transfers in some cases to avoid other problems. For example, I’ve seen several clusters get unstable during SST and the only explanation for this is the amount of network bandwidth consumed by the state transfer preventing the actual Galera communication between the nodes. Be sure to consider the overall state of production when tuning your SSTs.

The post Optimizing PXC Xtrabackup State Snapshot Transfer appeared first on MySQL Performance Blog.

Categories: MySQL

How much could you benefit from MySQL 5.6 parallel replication?

MySQL Performance Blog - Wed, 2015-08-19 10:00

I have heard this question quite often: “At busy times, our replicas start lagging quite frequently. We are using N schemas, so which performance boost could we expect from MySQL 5.6 parallel replication?” Here is a quick way to give you a rough estimate of the potential benefit.

General idea

In MySQL 5.6, parallelism is added at the schema level. So in theory, if you have N schemas and if you use N parallel threads, replication could be up to N times faster. This assumes at least 2 things:

  • Replication throughput scales linearly with the number of parallel threads.
  • Writes are evenly distributed across schemas.

Both assumptions are of course not realistic. But it is easy to know the distribution of writes, and that can already give you an idea about how much you could benefit from parallel replication.

Writes are stored in binary logs but it is much easier to work with the slow query log, so we can enable full slow query logging for some time with long_query_time = 0 and then use pt-query-digest to analyze the resulting log file.
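For example, assuming you can tolerate the logging overhead for a short window, capturing everything could look like this (note that the global long_query_time value only applies to new connections):

-- capture every statement in the slow query log for a representative period
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 0;
-- ... let the workload run ...
SET GLOBAL long_query_time = 10;  -- restore your usual threshold
SET GLOBAL slow_query_log = OFF;  -- or whatever your normal setting is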

An example

I have a test server with 3 schemas, and I’ve run some sysbench load on it to get a decent slow query log file. Once done, I can run this command:

pt-query-digest --filter '$event->{arg} !~ m/^select|^set|^commit|^show|^admin|^rollback|^begin/i' --group-by db --report-format profile slow_query.log > digest.out

and here is the result I get:

# Profile
# Rank Query ID Response time  Calls  R/Call V/M  Item
# ==== ======== ============== ====== ====== ==== ====
#    1 0x       791.6195 52.1% 100028 0.0079 0.70 db3
#    2 0x       525.1231 34.5% 100022 0.0053 0.68 db1
#    3 0x       203.4649 13.4% 100000 0.0020 0.64 db2

In a perfect world, with 3 parallel threads and if each schema would handle 33% of the total write workload, I could expect a 3x performance improvement.

However here we can see in the report that the 3 replication threads will only work simultaneously 25% of the time in the best case (13.4/52.1 = 0.25). We can also expect 2 replication threads to work simultaneously for some part of the workload, but let’s ignore that for clarity.

It means that instead of the theoretical 200% performance improvement (3 parallel threads 100% of the time), we can hardly expect more than a 50% performance improvement (3 parallel threads 25% of the time). And the reality is that the benefit will be much lower than that.

Conclusion

Parallel replication in MySQL 5.6 is a great step forward, however don’t expect too much if your writes are not evenly distributed across all your schemas. The pt-query-digest trick I shared can give you a rough idea whether your workload is a good fit for multi-threaded slaves in 5.6.

I’m expecting much better results for 5.7, partly because parallelism is handled differently, but also because you can tune how efficient parallel replication will be by adjusting the binlog group commit settings.

The post How much could you benefit from MySQL 5.6 parallel replication? appeared first on MySQL Performance Blog.

Categories: MySQL

Featured Talk: The Future of Replication is Today: New Features in Practice

MySQL Performance Blog - Tue, 2015-08-18 23:17

In the past years, MySQL 5.6, MySQL 5.7 and MariaDB 10 have all been successful in implementing new features. For many DBAs, the “old way” of replicating data is comfortable, so taking action to implement these new features seems like a momentous leap rather than a simple step. But perhaps it isn’t that complicated…

Giuseppe Maxia, a Quality Assurance Architect at VMware and loyal member of the Percona Live Conference Committee will be presenting “The Future of Replication is Today: New Features in Practice” at the Percona Live Data Performance Conference this September in Amsterdam.
Percona’s Community Manager, Tom Diederich had an opportunity to catch up with Giuseppe last week and get an in-depth look at some of the items Giuseppe will be covering in his talk in addition to getting his take on some of the hot sessions to hit while at the conference.  This is how it went:

(Hint: Read to the end to find a special discount code) 

 

Tom: Your talk is titled, “The Future of Replication is Today: New Features in Practice.” What are the top 3 areas in which replication options have improved in MySQL 5.6, MySQL 5.7, and MariaDB 10?

Giuseppe: Replication has been stagnant for over 10 years. Before MySQL 5.6, the only important change in the technology was the introduction of row-based replication in 2008. After that, we had to wait till 2013 to see global transaction identifiers in MySQL 5.6, followed by the same feature, with a different implementation, in 2014 with MariaDB 10. GTID has been complemented, in both flavors, with crash-safe replication tables, a feature that guarantees a reliable resume of replication after a server failure. There is also the parallel applier, a minor feature that has been implemented in both MySQL 5.6 and MariaDB, and improved in the latest versions, although it seems to lack proper support for monitoring. The last feature that was introduced in MySQL 5.6 and MariaDB 10 is multi-source replication, i.e. the ability to replicate from multiple masters to a single slave. In both editions, the implementation is quite simple, and not so different from what DBAs are used to doing for regular replication.

Tom: For DBAs, how difficult will it be to make the change from the “old way” of replicating data — to stop using the same comfortable features that have been around for several years — and put into practice some of the latest features?

Giuseppe: The adoption of new features can be deceptively simple. For example, GTID in MariaDB comes out of the box and its adoption could be as easy as running a backup followed by a restore, but it can produce unpleasant results if you try to combine this feature with multi-source replication without planning ahead. That said, the transition could be simpler than its counterpart in MySQL. MySQL 5.6 and 5.7 require some reconfiguration to run GTID, and users can face unpleasant failures due to the complexity of the rules applying to this feature. They will need to read the manual thoroughly and test the deployment extensively before trusting an upgrade in production.

For multi-source replication, the difficulties are, in my experience, hidden in the users’ expectations. When speaking about multi-source (or multi-master, as it is commonly referred to), many users have the mistaken expectation that they can easily insert anything into multiple masters as if they were doing it in a single server. However, the nature of asynchronous replication and the current implementation of multi-source topologies do not handle conflicts, and this fact will probably surprise and anger the early adopters.

Tom: What is still missing in replication technology? How can MySQL improve?

Giuseppe: There are two areas where the current implementation is lacking. The first one is monitoring data: while new features have been added to replication, there has not been enough effort made to cover the monitoring needs. The current way of monitoring replication is hard-wired around the original replication feature, and little has been done to give the users a deeper view of what is going on. With the latest releases at our disposal, we can run parallel replication using multiple masters, and yet we have very little visibility into what goes on inside the dozens of threads that the new features can unchain inside a single slave. It’s like driving an F1 racing car with the dashboard of a Ford Model T. MySQL 5.7 has moved a few steps in that direction, with the new replication tables in performance_schema, but it is still a drop in the ocean compared to what we need.

The second area where replication is still too much tied to its past is in heterogeneous replication. While relational databases still dominate the front end of the web economy, its back end is largely being run by different structures, such as Hadoop, MongoDB and Cassandra. Moving data back and forth between the relational storage and its growing siblings has become an urgent need. There have been a few sparks of change in this direction, but nothing that qualifies as a promising change.

Tom: Which other session(s) are you most looking forward to besides your own?

Giuseppe: I am always interested in the sessions that explain and discuss new features. I am most interested in the talks by Oracle engineers, who have been piling up many features in the latest years, and I am sure they have something more up their sleeve that will appear at the conference. I also eagerly attend sessions about complementary tools, which are usually highly educational and often give me more ideas.

Want to read more on the topic? Visit Giuseppe’s blog:

 MySQL Replication Monitoring 101

The Percona Live Data Performance Conference is the premier event for the rich and diverse MySQL, NoSQL and data in the cloud ecosystems in Europe. It is the place to be for the open source community as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and IoT (Internet of Things) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

This year’s conference will feature one day of tutorials and two days of keynote talks and breakout sessions related to MySQL, NoSQL and Data in the Cloud. Attendees will get briefed on the hottest topics, learn about building and maintaining high-performing deployments and hear from top industry leaders.

The Percona Live Europe Data Performance Conference will be September 21-23 at the Mövenpick Hotel Amsterdam City Centre.

Register using code “FeaturedTalk” and save 20 euros off of registration!

Hope to see you in Amsterdam!

The post Featured Talk: The Future of Replication is Today: New Features in Practice appeared first on MySQL Performance Blog.

Categories: MySQL

MySQL is crashing: a support engineer’s point of view

MySQL Performance Blog - Mon, 2015-08-17 11:38

In MySQL QA Episode #12, “MySQL is Crashing, now what?,” Roel demonstrated how to collect crash-related information that will help Percona discover what the issue is that you are experiencing, and fix it.

As a Support Engineer, I (Sveta) am very happy to see this post – but as a person who better understands writing than recording – I’d like to have the same information in textual form. We discussed it, and decided to do a joint blog post. Hence, this post.

If you haven’t seen the video yet, or you do not have any experience with gdb, core files and crashes, I highly recommend watching it first.

Once you have an idea of why crashes happen, what to do after it happens in your environment, and how to open a Support issue and/or a bug report, you’re ready for the next step: which information do you need to provide? Note that the more complete and comprehensive information you provide, the quicker the evaluation and potential fix process will go – it’s a win-win situation!

At first we need the MySQL error log file. If possible, please send us the full error log file. Often users like to send only the part which they think is relevant, but the error log file can contain other information, recorded before the crash happened. For example, records about table corruptions, lack of disk space, issues with InnoDB dictionary, etc.

If your error log is quite large, please note it will compress very well using a standard compression tool like gzip. If for some reason you cannot send the full error log file, please send all lines written after the words “mysqld: ready for connections” (as seen the last time before the actual crash) until the end of the error log file (alternatively, you can also search for lines starting with the word “Version:”). Or, if you use scripts (or mysqld_safe) which automatically restart MySQL Server in case of disaster, please search for the server start just prior to the crash.

An example which includes an automatic restart as mentioned above:

2015-08-03 14:24:03 9911 [Note] /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld: ready for connections. Version: '5.6.25-73.1-log' socket: '/tmp/mysql_sandbox21690.sock' port: 21690 Percona Server (GPL), Release 73.1, Revision 07b797f 2015-08-03 14:24:25 7f5b193f9700 InnoDB: Buffer pool(s) load completed at 150803 14:24:25 11:25:12 UTC - mysqld got signal 4 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. Please help us make Percona Server better by reporting any bugs at http://bugs.percona.com/ key_buffer_size=268435456 read_buffer_size=131072 max_used_connections=1 max_threads=216 thread_count=1 connection_count=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 348059 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x0 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... stack_bottom = 0 thread_stack 0x40000 /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(my_print_stacktrace+0x2e)[0x8dd38e] /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(handle_fatal_signal+0x491)[0x6a5dc1] /lib64/libpthread.so.0(+0xf890)[0x7f5c58ac8890] /lib64/libc.so.6(__poll+0x2d)[0x7f5c570fbc5d] /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(_Z26handle_connections_socketsv+0x1c2)[0x5f64c2] /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(_Z11mysqld_mainiPPc+0x1b5d)[0x5fd87d] /lib64/libc.so.6(__libc_start_main+0xf5)[0x7f5c57040b05] /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld[0x5f10fd] You may download the Percona Server operations manual by visiting http://www.percona.com/software/percona-server/. You may find information in the manual which will help you identify the cause of the crash. 150803 14:25:12 mysqld_safe Number of processes running now: 0 150803 14:25:12 mysqld_safe mysqld restarted /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld: /lib64/libssl.so.1.0.0: no version information available (required by /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld) /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld: /lib64/libcrypto.so.1.0.0: no version information available (required by /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld) 2015-08-03 14:25:12 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details). 2015-08-03 14:25:12 0 [Note] /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld (mysqld 5.6.25-73.1-log) starting as process 10038 ... 2015-08-03 14:25:12 10038 [Warning] Buffered warning: Changed limits: max_open_files: 1024 (requested 50005) 2015-08-03 14:25:12 10038 [Warning] Buffered warning: Changed limits: max_connections: 214 (requested 10000) 2015-08-03 14:25:12 10038 [Warning] Buffered warning: Changed limits: table_open_cache: 400 (requested 4096) 2015-08-03 14:25:12 10038 [Note] Plugin 'FEDERATED' is disabled. 
2015-08-03 14:25:12 10038 [Note] InnoDB: Using atomics to ref count buffer pool pages 2015-08-03 14:25:12 10038 [Note] InnoDB: The InnoDB memory heap is disabled 2015-08-03 14:25:12 10038 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins 2015-08-03 14:25:12 10038 [Note] InnoDB: Memory barrier is not used 2015-08-03 14:25:12 10038 [Note] InnoDB: Compressed tables use zlib 1.2.3 2015-08-03 14:25:12 10038 [Note] InnoDB: Using Linux native AIO 2015-08-03 14:25:12 10038 [Note] InnoDB: Using CPU crc32 instructions 2015-08-03 14:25:12 10038 [Note] InnoDB: Initializing buffer pool, size = 4.0G 2015-08-03 14:25:13 10038 [Note] InnoDB: Completed initialization of buffer pool 2015-08-03 14:25:13 10038 [Note] InnoDB: Highest supported file format is Barracuda. 2015-08-03 14:25:13 10038 [Note] InnoDB: The log sequence numbers 514865622 and 514865622 in ibdata files do not match the log sequence number 514865742 in the ib_logfiles! 2015-08-03 14:25:13 10038 [Note] InnoDB: Database was not shutdown normally! 2015-08-03 14:25:13 10038 [Note] InnoDB: Starting crash recovery. 2015-08-03 14:25:13 10038 [Note] InnoDB: Reading tablespace information from the .ibd files... 2015-08-03 14:25:14 10038 [Note] InnoDB: Restoring possible half-written data pages 2015-08-03 14:25:14 10038 [Note] InnoDB: from the doublewrite buffer... InnoDB: Last MySQL binlog file position 0 150866, file name mysql-bin.000006 2015-08-03 14:25:16 10038 [Note] InnoDB: 128 rollback segment(s) are active. 2015-08-03 14:25:16 10038 [Note] InnoDB: Waiting for purge to start 2015-08-03 14:25:16 10038 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.25-rel73.1 started; log sequence number 514865742 2015-08-03 14:25:16 7f67ceff9700 InnoDB: Loading buffer pool(s) from .//ib_buffer_pool 2015-08-03 14:25:16 10038 [Note] Recovering after a crash using mysql-bin 2015-08-03 14:25:16 10038 [Note] Starting crash recovery... 2015-08-03 14:25:16 10038 [Note] Crash recovery finished. 2015-08-03 14:25:17 10038 [Note] RSA private key file not found: /home/sveta/sandboxes/rsandbox_Percona-Server-5_6_25/master/data//private_key.pem. Some authentication plugins will not work. 2015-08-03 14:25:17 10038 [Note] RSA public key file not found: /home/sveta/sandboxes/rsandbox_Percona-Server-5_6_25/master/data//public_key.pem. Some authentication plugins will not work. 2015-08-03 14:25:17 10038 [Note] Server hostname (bind-address): '127.0.0.1'; port: 21690 2015-08-03 14:25:17 10038 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 2015-08-03 14:25:17 10038 [Note] Server socket created on IP: '127.0.0.1'. 2015-08-03 14:25:17 10038 [Warning] 'proxies_priv' entry '@ root@thinkie' ignored in --skip-name-resolve mode. 2015-08-03 14:25:17 10038 [Note] Event Scheduler: Loaded 0 events 2015-08-03 14:25:17 10038 [Note] /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld: ready for connections. Version: '5.6.25-73.1-log' socket: '/tmp/mysql_sandbox21690.sock' port: 21690 Percona Server (GPL), Release 73.1, Revision 07b797f

Usually the error log file contains the actual query which caused the crash. If it does not and you know the query (for example, if your application logs errors / query problems), please send us this query too. Additionally, if possible, include the CREATE TABLE statements for any tables mentioned in the query. Actually, working with the query is the first thing you can do to resolve the issue: try to run this query (on a non-production/test server which is as close a copy to your production server as possible) to ensure it crashes MySQL Server consistently. If so, you can try to create a temporary workaround by avoiding this kind of query in your application.

If you have doubts as to which query caused the crash, but have the general query log turned ON, you can use the parse_general_log.pl utility from percona-qa to create a potential test case. Simply execute:

$ sudo yum install bzr
$ cd ~
$ bzr branch lp:percona-qa
$ cp /path_that_contains_your_general_log/your_log_file.sql ~
$ ~/percona-qa/parse_general_log.pl -i./your_log_file.sql -o./output.sql

And subsequently execute output.sql against mysqld on a non-production test server to see if a crash is produced. Alternatively, you may mail us the output.sql file (provided your company privacy etc. policies allow for this). If you want to try and reduce the testcase further, please see QA Episode #7 on reducing testcases.

The next thing which we need is a backtrace. You usually have a simple backtrace showing in the error log directly after crash. An example (extracted from an error log) of what this looks like:

stack_bottom = 0 thread_stack 0x40000
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(my_print_stacktrace+0x2e)[0x8dd38e]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(handle_fatal_signal+0x491)[0x6a5dc1]
/lib64/libpthread.so.0(+0xf890)[0x7f5c58ac8890]
/lib64/libc.so.6(__poll+0x2d)[0x7f5c570fbc5d]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(_Z26handle_connections_socketsv+0x1c2)[0x5f64c2]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(_Z11mysqld_mainiPPc+0x1b5d)[0x5fd87d]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f5c57040b05]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld[0x5f10fd]

Note that the above backtrace is mangled. You can send us the file like this (we can demangle it). However, if you want to work with it yourself more comfortably, you can demangle it with the help of the c++filt utility:

sveta@linux-85fm:~/sandboxes/rsandbox_Percona-Server-5_6_25> cat master/data/msandbox.err | c++filt
... terribly wrong...
stack_bottom = 0 thread_stack 0x40000
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(my_print_stacktrace+0x2e)[0x8dd38e]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(handle_fatal_signal+0x491)[0x6a5dc1]
/lib64/libpthread.so.0(+0xf890)[0x7f5c58ac8890]
/lib64/libc.so.6(__poll+0x2d)[0x7f5c570fbc5d]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(handle_connections_sockets()+0x1c2)[0x5f64c2]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(mysqld_main(int, char**)+0x1b5d)[0x5fd87d]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f5c57040b05]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld[0x5f10fd]
...

Now the backtrace looks much nicer. When sending error log reports to us, please do not use c++filt on them before sending. We have a list of known bugs, and to scan the known bugs list we need to receive the error log unaltered.

You can also turn core files ON. Core files are memory dump files, created when a process crashes. They are very helpful for debugging, because they contain not only the backtrace of crashing thread, but backtraces of all threads, and much of what was in memory at the time the crash occurred.

Sidenote: Please note it is always a good idea to have the debuginfo (for example Percona-Server-56-debuginfo.x86_64 from the Percona Repository) package installed. This package provides the debugging symbols for Percona server (there are similar packages for other distributions) and ensures that stack traces are more readable and contain more information. It is important to ensure that you have the right package version etc. as symbols are different for each version of mysqld. If you have installed Percona Server from our repository, you can simply install the debuginfo package, the version will be correct, and the package will be auto-updated when Percona Server is updated.

By default the MySQL server does not create core files. To let it do so, you can follow instructions from the “GDB Cheat sheet” (page 2 under header ‘Core Files Cheat Sheet’). In short:

  • Add the option core-file under the [mysqld] section of your configuration file
  • Tune your OS options, so it allows mysqld to create core files as described in the cheat sheet
    echo "core.%p.%e.%s" > /proc/sys/kernel/core_pattern
    ulimit -c unlimited
    sudo sysctl -w fs.suid_dumpable=2

    Note: some systems use ‘kernel’ instead of ‘fs’: use kernel.suid_dumpable=2 instead if you get key or file warnings/errors.
  • Restart the MySQL server

Besides the core file which is generated by the MySQL server, you can also setup the operating system to dump a core file. These are two different core files (for a single crash of the mysqld binary), and the amount of information contained within may differ. The procedure above shows how to setup the one for the MySQL Server alone.

If you like your operating system to dump a core file as well, please see the MySQL QA Episode #12 video. Also, please note that changing the ulimit and fs.suid_dumpable settings may alter the security of your system. Please read more about these options online before using them or leaving them permanently on a production system.

Once a core file is generated, you can use the GDB utility to debug the core file (also called a ‘coredump’). GDB allows you to better resolve backtraces (also called ‘stack traces’ or ‘stacks’), for example by taking a backtrace of all threads instead of only the crashing thread. This is of course better than the single backtrace available in the error log file. To use GDB, you need to first start it:

gdb /path_to_mysqld /path_to_core

/path_to_core is usually your data directory (for coredumps produced by mysqld as a result of using the –core-file option in your my.cnf file), or sometimes the same directory where the crashing binary is (for coredumps produced by the OS) – though you can specify an alternate fixed location for OS coredumps as described in the cheat sheet. Note that OS-generated dumps are sometimes written with very few privileges and so you may have to use chown/chmod/sudo to access them.
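For example, a minimal sketch of pointing OS coredumps at a fixed directory (the /var/crash path is only an assumption – use whatever location suits your system):

# write OS core dumps to a fixed, writable directory (illustrative path)
sudo mkdir -p /var/crash
echo "/var/crash/core.%p.%e.%s" | sudo tee /proc/sys/kernel/core_pattern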

Once you’re into GDB, and all looks fine, run the commands bt (backtrace) and thread apply all bt (get a backtrace for all threads) to get the stack traces. bt should more or less match the backtrace seen in the error log, but sometimes this may differ.

For us, ideally you would run the following commands in GDB (as seen in the cheat sheet):

set trace-commands on
set pagination off
set print pretty on
set print array on
set print array-indexes on
set print elements 4096
set logging file gdb_standard.txt
set logging on
thread apply all bt
set logging off
set logging file gdb_full.txt
set logging on
thread apply all bt full

After you run these commands and have exited (quit) GDB, please send us the ./gdb_standard.txt and ./gdb_full.txt files.

Finally, we would be happy to receive the actual core file from you. In terms of security and privacy, please note that a core file often contains fragments, or sections, or even the full memory of your server.

However, a core file without mysqld is useless, so please include the mysqld binary together with the core file. If you use our compiled binaries you can also specify the exact package and file name which you downloaded, but if you use a self-compiled version of the server, the mysqld binary is required for us to resolve the backtrace and other necessary information (like variables) from your core file. Generally speaking, it’s easier just to send mysqld along.

Also, it would be really nice if you send us the library files which are dynamically linked with the mysqld you use. You can get them by using a tool called ldd_files.sh from the percona-qa suite. Just create a temporary directory, copy your mysqld binary to it and run the tool on it:

sveta@thinkie:~/tmp> wget http://bazaar.launchpad.net/~percona-core/percona-qa/trunk/download/head:/ldd_files.sh-20150713030145-8xdk0llrd3skfsan-1/ldd_files.sh
sveta@thinkie:~/tmp> mkdir tmp
sveta@thinkie:~/tmp> cd tmp/
sveta@thinkie:~/tmp/tmp> cp /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld .  # Copy of your mysqld
sveta@thinkie:~/tmp/tmp> ../ldd_files.sh mysqld  # Run the tool on it
cp: cannot stat ‘./mysqld: /lib64/libssl.so.1.0.0: no version information available’: No such file or directory  # Ignore
cp: cannot stat ‘./mysqld: /lib64/libcrypto.so.1.0.0: no version information available’: No such file or directory  # Ignore
sveta@thinkie:~/tmp/tmp> ls
ld-linux-x86-64.so.2 libaio.so.1 libcrypto.so.1.0.0 libcrypt.so.1 libc.so.6 libdl.so.2 libgcc_s.so.1 libm.so.6 libpthread.so.0 librt.so.1 libssl.so.1.0.0 libstdc++.so.6 libz.so.1 mysqld  # Files to supply in combination with mysqld

These library files are needed in case some of the frames from the stack trace are system calls, so that our developers can resolve/check those frames as well.

Summary

If you hit a crash, please send us (in order of preference, but even better ‘all of these’):

  • The error log file (please send it unaltered – i.e. before c++filt was executed – which allows us to scan for known bugs)
  • The crashing query (from your application logs and/or extracted from the core file – ref the query extraction blog post)
    • Please include the matching CREATE TABLE statements
  • A resolved backtrace (and/or preferably the ./gdb_standard.txt and ./gdb_full.txt files)
  • The core file together with the mysqld binary and preferably the ldd files

Thank you!

 

The post MySQL is crashing: a support engineer’s point of view appeared first on MySQL Performance Blog.

Categories: MySQL

The language of compression

MySQL Performance Blog - Thu, 2015-08-13 19:19

Leif Walsh will talk about the language of compression at Percona Live Amsterdam

Storage. Everyone needs it. Whether your data is in MySQL, a NoSQL, or somewhere in the cloud, with ever-growing data volumes – along with the need for SSDs to cut latency and replication to provide insurance – an organization’s storage footprint is an important place to look for savings. That’s where compression comes in (squeeze!) to save disk space.

Two Sigma software engineer Leif Walsh speaks the language of compression. Fluently. In fact, he’ll be speaking on

that exact subject September 22 during the Percona Live conference in Amsterdam.

I asked him about his talk, and about Amsterdam, the other day. Here’s what he had to say.

* * *

Tom: Hi Leif, how will your talk help IT decision-makers cut through the marketing mumbo-jumbo on what’s important to focus on and what is not?
Leif: My talk will have three lessons aimed at those making storage decisions for their company:

  1. What are the key factors to consider when evaluating storage options, and how can they affect your bottom line?  This is not only how storage tech influences your hardware, operations, and management costs, but also how it can facilitate new development initiatives and cut time-to-market for your products.
  2. How should you read benchmarks and marketing materials about storage technology?  You’ll learn what to look for in promotional material, and how to think critically about whether that material is applicable to your business needs.
  3. What’s the most effective way to communicate with storage vendors about your application’s requirements?  A lot of time can be spent in the early stages of a relationship in finding a common language for users and vendors to have meaningful discussions about users’ needs and vendors’ capacity to meet those needs.  With the tools you’ll learn in my talk, you’ll be able to accelerate quickly to the high-bandwidth conversations you need to have in order to make the right decision, and consequently, you’ll be empowered to evaluate more choices to find the best one faster.

Tom: In addition to IT decision-makers, who else should attend your session and what will they take away afterward?
Leif: My talk is primarily about the language that everyone in the storage community should be using to communicate. Therefore, storage vendors should attend to get ideas for how  to express their benchmarks and their system’s properties more effectively, and application developers and operations people will learn strategies for getting better support and for making a convincing case to the decision makers in their own company.

Tom: Which session(s) are you most looking forward to besides your own?
Leif: Sam Kottler is a good friend and an intensely experienced systems engineer with a dynamic and boisterous personality, so I can’t wait to hear more about his experiences with Linux tuning.

As one of the original developers of TokuMX, I’ll absolutely have to check out Stephane’s talk about it, but I promise not to heckle. Charity Majors is always hilarious and has great experiences and insights to share, so I’ll definitely check out her talk too.

* * *

Catch Leif’s talk at Percona Live in Amsterdam September 21-23. Enter the promo code “BlogInterview” at registration and save €20! Register now!

The post The language of compression appeared first on MySQL Performance Blog.

Categories: MySQL

MySQL Quality Assurance: A Vision for the Future by Roel Van de Paar (Final Episode 13)

MySQL Performance Blog - Thu, 2015-08-13 17:13

Welcome to the final – but most important – episode in the MySQL QA Series.

In it, I present my vision for all MySQL Quality Assurance – for all distributions – worldwide.

Episode 13: A Better Approach to all MySQL Regression, Stress & Feature Testing: Random Coverage Testing & SQL Interleaving

1. pquery Review
2. Random Coverage Testing
3. SQL Interleaving
4. The past & the future

Presented by Roel Van de Paar. Full-screen viewing @ 720p resolution recommended

Interested in the full MySQL QA Series?

The post MySQL Quality Assurance: A Vision for the Future by Roel Van de Paar (Final Episode 13) appeared first on MySQL Performance Blog.

Categories: MySQL

Percona Live Amsterdam Discounted Pricing and Community Dinner!

MySQL Performance Blog - Wed, 2015-08-12 21:37

The countdown is on for the annual Percona Live Data Performance Conference and Expo in Europe! This year the conference will be taking place in the great city of Amsterdam September 21-23rd. This three day conference will focus on the latest trends, news and best practices in the MySQL, NoSQL and Data in the Cloud markets, while looking forward to what’s on the long-term horizon within the global Data Performance industry. With 84 breakout sessions, 8 tutorial sessions and 6 keynotes, including a Data in the Cloud keynote panel, there will certainly be no lack of content.
Advanced Rate Registration ENDS August 16th so make sure to register now to secure the best price possible.

As it is a Percona Live Conference, there will certainly be no lack of FUN either!!!!

As tradition holds, there will be a Community Dinner. Tuesday night, September 22, Percona Live Diamond Sponsor Booking.com will be hosting the Community Dinner of the year at their very own headquarters located in historic Rembrandt Square in the heart of the city. After breakout sessions conclude, attendees will be picked up right outside of the venue and taken to booking.com’s headquarters by canal boats! This gives all attendees the opportunity to play “tourist” while viewing the beauty of Amsterdam from the water. Attendees will be dropped off right next to Booking.com’s office! Come and show your support for the community while enjoying dinner and drinks! Space is limited so make sure to sign up ASAP!

So don’t forget to register for the conference and sign up for the community dinner before space is gone!
See you in Amsterdam!

The post Percona Live Amsterdam Discounted Pricing and Community Dinner! appeared first on MySQL Performance Blog.

Categories: MySQL

MySQL replication primer with pt-table-checksum and pt-table-sync

MySQL Performance Blog - Wed, 2015-08-12 07:00

MySQL replication is a process that allows you to easily maintain multiple copies of MySQL data by having them copied automatically from a master to a slave database.

It’s essential to make sure the slave servers have the same set of data as the master to ensure data is consistent within the replication stream. MySQL slave server data can drift from the master for many reasons – e.g. replication errors, accidental direct updates on slave, etc.

Here at Percona Support we highly recommend that our customers periodically run the pt-table-checksum tool to verify data consistency within replication streams, specifically after fixing replication errors on slave servers, to ensure that the slave has identical data as its master. You don’t want to put yourself in a situation where you need to fail over to a slave server for some reason and find different data on that slave server.

In this post, I will examine the pt-table-checksum and pt-table-sync tools usage from Percona Toolkit on different replication topologies. We often receive queries from customers about how to run these tools and I hope this post will help.

Percona Toolkit is a free collection of advanced command-line tools to perform a variety of MySQL server and system tasks that are too difficult or complex to perform manually.

One of those tools is pt-table-checksum, which works by dividing table rows into chunks of rows. The size of a chunk changes dynamically during the operation to avoid overloading the server. pt-table-checksum has many safeguards, including varying the chunk size, to make sure queries run in a desired amount of time.

pt-table-checksum verifies the chunk size by running an EXPLAIN query on each chunk. It also monitors the slave servers continuously to make sure the replicas do not fall too far behind; if they do, the tool pauses itself to allow the slaves to catch up. Along with that there are many other safeguards built in, and you can find all the details in the documentation.
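To give an idea of how those safeguards are exposed, here is a sketch with illustrative values (not recommendations) for the chunking and throttling options:

# Illustrative values only: target ~0.5s per chunk, pause if any replica lags
# more than 5 seconds, and throttle when Threads_running exceeds 50.
pt-table-checksum --replicate=percona.checksums \
  --chunk-time=0.5 --max-lag=5 --max-load=Threads_running=50 \
  h=localhost,u=checksum_user,p=checksum_password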

In my first example, I am going to run pt-table-checksum against a pair of replication servers – i.e. a master having only one slave in the replication topology. We will run the pt-table-checksum tool on the master server to verify data integrity on the slave, and if differences are found by pt-table-checksum, we will sync those changes to the slave server via the pt-table-sync tool.

I have created a dummy table under test database and inserted 10 records on master server as below:

mysql-master> create table dummy (id int(11) not null auto_increment primary key, name char(5)) engine=innodb;
Query OK, 0 rows affected (0.08 sec)

mysql-master> insert into dummy VALUES (1,'a'), (2,'b'), (3,'c'), (4,'d'), (5,'e'), (6,'f'), (7,'g'), (8,'h'), (9,'i'), (10,'j');
Query OK, 10 rows affected (0.00 sec)
Records: 10  Duplicates: 0  Warnings: 0

mysql-master> select * from dummy;
+------+------+
| id   | name |
+------+------+
|    1 | a    |
|    2 | b    |
|    3 | c    |
|    4 | d    |
|    5 | e    |
|    6 | f    |
|    7 | g    |
|    8 | h    |
|    9 | i    |
|   10 | j    |
+------+------+
10 rows in set (0.00 sec)

Then I intentionally deleted a few records from the slave server to make it inconsistent with the master for the purpose of this post.

mysql-slave> delete from dummy where id>5;
Query OK, 5 rows affected (0.03 sec)

mysql-slave> select * from dummy;
+----+------+
| id | name |
+----+------+
|  1 | a    |
|  2 | b    |
|  3 | c    |
|  4 | d    |
|  5 | e    |
+----+------+
5 rows in set (0.00 sec)

Now, in this case the master server has 10 records in the dummy table while the slave server has only 5 (the records with id>5 are missing) – we will run pt-table-checksum at this point on the master server to see if the tool catches those differences.

[root@master]# pt-table-checksum --replicate=percona.checksums --ignore-databases mysql h=localhost,u=checksum_user,p=checksum_password
            TS ERRORS  DIFFS     ROWS  CHUNKS SKIPPED    TIME TABLE
07-11T18:30:13      0      1       10       1       0   1.044 test.dummy

This needs to be executed on the master. The user and password you specify will be used to not only connect to the master but the slaves as well. You need the following privileges for the pt-table-checksum mysql user:

mysql-master> GRANT REPLICATION SLAVE, PROCESS, SUPER, SELECT ON *.* TO `checksum_user`@'%' IDENTIFIED BY 'checksum_password';
mysql-master> GRANT ALL PRIVILEGES ON percona.* TO `checksum_user`@'%';

Earlier, in the pt-table-checksum command, I used the --replicate option, which writes the checksum results to the specified table, percona.checksums. Next I passed the --ignore-databases option, which accepts a comma-separated list of databases to ignore. Moreover, the --create-replicate-table and --empty-replicate-table options are “Yes” by default, and you can specify both options explicitly if you want to use a database table different from percona.checksums.

pt-table-checksum reported 1 DIFF, which is the number of chunks that are different from the master on one or more slaves. You can find details about the tabular columns, e.g. TS, ERRORS and so on, in the pt-table-checksum documentation. After that, I ran the next command to identify which table has differences on the slave.

[root@master]# pt-table-checksum --replicate=percona.checksums --replicate-check-only --ignore-databases mysql h=localhost,u=checksum_user,p=checksum_password
Differences on slave
TABLE      CHUNK CNT_DIFF CRC_DIFF CHUNK_INDEX LOWER_BOUNDARY UPPER_BOUNDARY
test.dummy     1       -5        1

In this command I used the --replicate-check-only option, which only reports the tables that have differences – i.e. only the checksum differences found on the detected replicas are printed. It doesn’t checksum any tables. It checks replicas for differences found by previous checksumming, and then exits.

You may also login to the slave and also execute below query to find out which tables have inconsistencies.

mysql-slave> SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks FROM percona.checksums WHERE ( master_cnt <> this_cnt OR master_crc <> this_crc OR ISNULL(master_crc) <> ISNULL(this_crc)) GROUP BY db, tbl;

pt-table-checksum identified that the test.dummy table is different on the slave; now we are going to use the pt-table-sync tool to synchronize table data between the MySQL servers.

[root@slave]# pt-table-sync --print --replicate=percona.checksums --sync-to-master h=localhost,u=checksum_user,p=checksum_password REPLACE INTO `test`.`dummy`(`id`, `name`) VALUES ('6', 'f') /*percona-toolkit src_db:test src_tbl:dummy src_dsn:P=3306,h=192.168.0.130,p=...,u=checksum_user dst_db:test dst_tbl:dummy dst_dsn:h=localhost,p=...,u=checksum_user lock:1 transaction:1 changing_src:percona.checksums replicate:percona.checksums bidirectional:0 pid:24683 user:root host:slave*/; REPLACE INTO `test`.`dummy`(`id`, `name`) VALUES ('7', 'g') /*percona-toolkit src_db:test src_tbl:dummy src_dsn:P=3306,h=192.168.0.130,p=...,u=checksum_user dst_db:test dst_tbl:dummy dst_dsn:h=localhost,p=...,u=checksum_user lock:1 transaction:1 changing_src:percona.checksums replicate:percona.checksums bidirectional:0 pid:24683 user:root host:slave*/; REPLACE INTO `test`.`dummy`(`id`, `name`) VALUES ('8', 'h') /*percona-toolkit src_db:test src_tbl:dummy src_dsn:P=3306,h=192.168.0.130,p=...,u=checksum_user dst_db:test dst_tbl:dummy dst_dsn:h=localhost,p=...,u=checksum_user lock:1 transaction:1 changing_src:percona.checksums replicate:percona.checksums bidirectional:0 pid:24683 user:root host:slave*/; REPLACE INTO `test`.`dummy`(`id`, `name`) VALUES ('9', 'i') /*percona-toolkit src_db:test src_tbl:dummy src_dsn:P=3306,h=192.168.0.130,p=...,u=checksum_user dst_db:test dst_tbl:dummy dst_dsn:h=localhost,p=...,u=checksum_user lock:1 transaction:1 changing_src:percona.checksums replicate:percona.checksums bidirectional:0 pid:24683 user:root host:slave*/; REPLACE INTO `test`.`dummy`(`id`, `name`) VALUES ('10', 'j') /*percona-toolkit src_db:test src_tbl:dummy src_dsn:P=3306,h=192.168.0.130,p=...,u=checksum_user dst_db:test dst_tbl:dummy dst_dsn:h=localhost,p=...,u=checksum_user lock:1 transaction:1 changing_src:percona.checksums replicate:percona.checksums bidirectional:0 pid:24683 user:root host:slave*/;

I ran the pt-table-sync tool from the opposite host this time, i.e. from the slave, as I used the --sync-to-master option, which treats the DSN as a slave and syncs it to its master. Again, pt-table-sync will use the mysql username and password you specify to connect to the slave as well as to its master. The --replicate option here examines the specified table to find the data differences, and --print just prints the SQL (REPLACE queries) without actually executing it.

You may audit the queries before executing them to sync data between master/slave. You can see it printed only the records missing on the slave. Once you are happy with the results, you can substitute --print with --execute to do the actual synchronization.

As a reminder, these queries are always executed on the master, as this is the only safe way to make the changes on the slave. On the master they are no-op changes, as these records already exist there, but they then flow to the slave via the replication stream and bring it back in sync with the master.

If you find lots of differences on your slave server, it may lag during synchronization of those changes. As I mentioned earlier, you can use the --print option to go through the queries which are going to be executed to sync the slave with the master server. I found this post useful if you see a huge difference in the table between master/slave(s).

Note, you may use the --dry-run option initially, which only analyzes and prints information about the sync algorithm and then exits. It shows verbose output; it doesn’t make any changes though. The --dry-run parameter basically instructs pt-table-sync to not actually do the sync, but just perform some checks.

Let me present another replication topology, where the master has two slaves: slave2 is running on the non-default port 3307 while the master and slave1 are running on port 3306. Further, slave2 is out of sync with the master, and I will show you how to sync slave2 (running on port 3307) with the master.

mysql-master> SELECT * FROM dummy;
+----+------+
| id | name |
+----+------+
|  1 | a    |
|  2 | b    |
|  3 | c    |
|  4 | d    |
|  5 | e    |
+----+------+
5 rows in set (0.00 sec)

mysql-slave1> SELECT * FROM test.dummy;
+----+------+
| id | name |
+----+------+
|  1 | a    |
|  2 | b    |
|  3 | c    |
|  4 | d    |
|  5 | e    |
+----+------+
5 rows in set (0.00 sec)

mysql-slave2> SELECT * FROM test.dummy;
+----+------+
| id | name |
+----+------+
|  1 | a    |
|  2 | b    |
|  3 | c    |
+----+------+

Let’s run pt-table-checksum tool on master database server.

[root@master]# pt-table-checksum --replicate percona.checksums --ignore-databases=mysql h=192.168.0.130,u=checksum_user,p=checksum_password --recursion-method=dsn=D=percona,t=dsns
            TS ERRORS  DIFFS     ROWS  CHUNKS SKIPPED    TIME TABLE
07-23T13:57:39      0      0        2       1       0   0.310 percona.dsns
07-23T13:57:39      0      1        5       1       0   0.036 test.dummy

I used the --recursion-method parameter this time, which is the method used to find slaves in the replication stream, and it’s pretty useful when your servers run on a non-standard port, i.e. other than 3306. I created the dsns table under the percona database with the following entries (a sketch of the table definition follows the output below); you may also find the dsns table structure in the documentation.

mysql> SELECT * FROM dsns;
+----+-----------+-------------------------------------------------------------+
| id | parent_id | dsn                                                         |
+----+-----------+-------------------------------------------------------------+
|  1 |         1 | h=192.168.0.134,u=checksum_user,p=checksum_password,P=3306 |
|  2 |         2 | h=192.168.0.132,u=checksum_user,p=checksum_password,P=3307 |
+----+-----------+-------------------------------------------------------------+
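For reference, a minimal definition of that table, based on the structure described in the pt-table-checksum documentation, looks roughly like this:

CREATE TABLE percona.dsns (
  id        int(11) NOT NULL AUTO_INCREMENT,
  parent_id int(11) DEFAULT NULL,
  dsn       varchar(255) NOT NULL,
  PRIMARY KEY (id)
);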

Next I ran below pt-table-checksum command to identify which slave server has differences on test.dummy table.

[root@master]# pt-table-checksum --replicate=percona.checksums --replicate-check-only --ignore-databases=mysql h=192.168.0.130,u=checksum_user,p=checksum_password --recursion-method=dsn=D=percona,t=dsns
Differences on slave2
TABLE      CHUNK CNT_DIFF CRC_DIFF CHUNK_INDEX LOWER_BOUNDARY UPPER_BOUNDARY
test.dummy     1       -2        1

This shows that slave2 has different data on test.dummy table as compared to the master. Now let’s run pt-table-sync tool to sync those differences and make slave2 identical as the master.

[root@slave2] ./pt-table-sync --print --replicate=percona.checksums --sync-to-master h=192.168.0.132,u=checksum_user,p=checksum_password REPLACE INTO `test`.`dummy`(`id`, `name`) VALUES ('4', 'd') /*percona-toolkit src_db:test src_tbl:dummy src_dsn:P=3306,h=192.168.0.130,p=...,u=checksum dst_db:test dst_tbl:dummy dst_dsn:h=192.168.0.132,p=...,u=checksum lock:1 transaction:1 changing_src:percona.checksums replicate:percona.checksums bidirectional:0 pid:1514 user:root host:slave2*/; REPLACE INTO `test`.`dummy`(`id`, `name`) VALUES ('5', 'e') /*percona-toolkit src_db:test src_tbl:dummy src_dsn:P=3306,h=192.168.0.130,p=...,u=checksum dst_db:test dst_tbl:dummy dst_dsn:h=192.168.0.132,p=...,u=checksum lock:1 transaction:1 changing_src:percona.checksums replicate:percona.checksums bidirectional:0 pid:1514 user:root host:slave2*/;

It shows 2 rows are different on slave2. Substituting --print with --execute synchronized the differences on slave2, and re-running the pt-table-checksum tool shows no more differences.

Conclusion:
pt-table-checksum and pt-table-sync are the finest tools from Percona Toolkit to validate data between master/slave(s). With the help of these tools you can easily identify data drift and fix it. I covered a couple of replication topologies above, showing how to check replication consistency and how to fix it in case of data drift. You may script the pt-table-checksum / pt-table-sync steps and cron the checksum script to periodically check data consistency within the replication stream (see the sketch below for one way to do this).
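As a minimal sketch of that idea (the schedule, paths and e-mail address below are assumptions – adapt them to your environment and monitoring setup):

# /etc/cron.d/pt-table-checksum -- illustrative example only
# Checksum all databases every Sunday at 02:00 and mail the report.
0 2 * * 0 root /usr/bin/pt-table-checksum --replicate=percona.checksums --ignore-databases=mysql --quiet h=localhost,u=checksum_user,p=checksum_password 2>&1 | mail -s "pt-table-checksum report" dba@example.com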

This procedure is only safe for a single level master-slave(s) hierarchy. I will discuss the procedure for other topologies in future posts – i.e. I will describe more complex scenarios on how to use these tools in chain replication i.e. master -> slave1 -> slave2 pair and in Percona XtraDB Cluster setup.

The post MySQL replication primer with pt-table-checksum and pt-table-sync appeared first on MySQL Performance Blog.

Categories: MySQL

MySQL QA Episode 12: My server is crashing… Now what? For customers or users experiencing a crash

MySQL Performance Blog - Tue, 2015-08-11 10:00

My server is crashing… Now what?

This special episode in the MySQL QA Series is for customers or users experiencing a crash.

  1. A crash?
    1. Cheat sheet: https://goo.gl/rrmB9i
    2. Server install & crash. Note this is a demonstration: do not action this on a production server!
      sudo yum install -y http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
      sudo yum install -y Percona-Server-client-56 Percona-Server-server-56
      sudo service mysql start
  2. Gimme Stacks!
    1. Debug info packages (can be executed on a production system, but do match your 5.5, 5.6 or 5.7 version correctly)
      sudo yum install -y Percona-Server-56-debuginfo
  3. Testcase?

Full-screen viewing @ 720p resolution recommended.

Also checkout the full MySQL QA Series!

The post MySQL QA Episode 12: My server is crashing… Now what? For customers or users experiencing a crash appeared first on MySQL Performance Blog.

Categories: MySQL

The MySQL query cache: Worst enemy or best friend?

MySQL Performance Blog - Fri, 2015-08-07 10:00

During the last couple of months I have been involved in an unusually high number of performance audits for e-commerce applications running with Magento. And although the systems were quite different, they all had one thing in common: the MySQL query cache was very useful. That was counter-intuitive for me as I’ve always expected the query cache to be such a bottleneck that response time is better when the query cache is turned off no matter what. That led me to run a few experiments to better understand when the query cache can be helpful.

Some context

The query cache is well known for its contentions: a global mutex has to be acquired for any read or write operation, which means that any access is serialized. This was not an issue 15 years ago, but with today’s multi-core servers, such serialization is the best way to kill performance.

However from a performance point of view, any query cache hit is served in a few tens of microseconds while the fastest access with InnoDB (primary lookup) still requires several hundreds of microseconds. Yes, the query cache is at least an order of magnitude faster than any query that goes to InnoDB.

A simple test

To better understand how good or bad the query cache can be, I set up a very simple benchmark:

  • 1M records were inserted in 16 tables.
  • A moderate write load (65 updates/s) was run with a modified version of the update_index.lua sysbench script (see the end of the post for the code).
  • The select.lua sysbench script was run, with several values for the --num-threads option.

Note that the test is designed to be unfavorable to the query cache as the whole dataset fits in the buffer pool and the SELECT statements are very simple. Also note that I configured the query cache to be large enough so that no entry was evicted from the cache due to low memory.
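For reference, the relevant settings were along these lines (a sketch – the 500M figure is simply an illustrative size large enough to avoid evictions in this test, not a recommendation):

[mysqld]
query_cache_type = 1      # 0/OFF disables the query cache entirely
query_cache_size = 500M   # large enough that no entries get evicted in this test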

Results – MySQL query cache ON

First here are the results when the query cache is enabled:

This configuration scales well up to 4 concurrent threads, but then the throughput degrades very quickly. With 10 concurrent threads, SHOW PROCESSLIST is enough to show you that all threads spend all their time waiting for the query cache mutex. Okay, this is not a surprise.

Results – MySQL query cache OFF

When the query cache is disabled, this is another story:

Throughput scales well up to somewhere between 10 and 20 threads (for the record the server I was using had 16 cores). But more importantly, even at the higher concurrencies, the overall throughput continued to increase: at 20 concurrent threads, MySQL was able to serve nearly 3x more queries without the query cache.

Conclusion

With Magento, you can expect to have a light write workload, very low concurrency and also quite complex SELECT statements. Given the results of our simple benchmarks, it is finally not that surprising that the MySQL query cache is a good fit in this case.

It is also worth noting that many applications run a database workload where writes are light and concurrency is low: the query cache should then not be discarded immediately. And maybe it is time for Oracle to make plans to improve the query cache as suggested by Peter a few years ago?

Annex: sysbench commands

# Modified update_index.lua
function event(thread_id)
   local table_name
   table_name = "sbtest".. sb_rand_uniform(1, oltp_tables_count)
   rs = db_query("UPDATE ".. table_name .." SET k=k+1 WHERE id=" .. sb_rand(1, oltp_table_size))
   db_query("SELECT SLEEP(0.015)")
end

# Populate the tables
sysbench --mysql-socket=/data/mysql/mysql.sock --mysql-user=root --mysql-db=db1 --oltp-table-size=1000000 --oltp-tables-count=16 --num-threads=16 --test=/usr/share/doc/sysbench/tests/db/insert.lua prepare

# Write workload
sysbench --mysql-socket=/data/mysql/mysql.sock --mysql-user=root --mysql-db=db1 --oltp-tables-count=16 --num-threads=1 --test=/usr/share/doc/sysbench/tests/db/update_index.lua --max-requests=1000000 run

# Read workload
sysbench --mysql-socket=/data/mysql/mysql.sock --mysql-user=root --mysql-db=db1 --oltp-tables-count=16 --num-threads=1 --test=/usr/share/doc/sysbench/tests/db/select.lua --max-requests=10000000 run

The post The MySQL query cache: Worst enemy or best friend? appeared first on MySQL Performance Blog.

Categories: MySQL

PXC – Incremental State transfers in detail

MySQL Performance Blog - Wed, 2015-08-05 10:00
IST Basics

State transfers in Galera remain a mystery to most people.  Incremental State transfers (as opposed to full State Snapshot transfers) are used under the following conditions:

  • The Joiner node reports a valid Galera GTID to the cluster
  • The selected Donor node still holds, in its Gcache, all the transactions the Joiner needs to catch up to the rest of the cluster
  • The Donor node can establish a TCP connection to the Joiner on port 4568 (by default); a quick way to check these preconditions is sketched just below
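
Here is a hedged sketch for checking those preconditions from the shell; the file paths are assumptions (adjust them to your datadir and config layout):

# The GTID (uuid:seqno) the Joiner will report is kept in grastate.dat
cat /var/lib/mysql/grastate.dat
# The Donor can only serve IST if its Gcache still covers the missing range;
# the ring-buffer size is part of wsrep_provider_options (gcache.size)
grep -o 'gcache.size=[^;"]*' /etc/my.cnf
# Finally, make sure nothing firewalls TCP port 4568 between Donor and Joiner
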
IST states

Galera has many internal node states related to Joiner nodes.  They currently are:

  1. Joining
  2. Joining: preparing for State Transfer
  3. Joining: requested State Transfer
  4. Joining: receiving State Transfer
  5. Joining: State Transfer request failed
  6. Joining: State Transfer failed
  7. Joined

I don’t claim any special knowledge of most of these states apart from what their titles indicate.  Many of these states occur very briefly and it is unlikely you’ll ever actually see them in a node’s wsrep_local_state_comment.
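
The state a node currently reports is exposed as a status variable, so it is trivial to poll; a one-liner for reference:

mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';"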

During IST, however, I have observed that the following states have the potential to take a long while:

Joining: receiving State Transfer

During this state, transactions are being streamed to the Joiner’s wsrep_local_recv_queue.  You can connect to the node at this time and poll its state.  If you do, you’ll easily see the inbound queue increasing (usually quickly) but no writesets being ‘received’ (read: applied).  It’s not clear to me if there is a reason why transaction apply couldn’t be started during this stream, but it does not happen currently.
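
If you don’t have myq-tools handy, the same queue growth can be watched with a plain status poll; a minimal sketch (the myq-tools output below shows it in a friendlier format):

# Poll the inbound (receive) queue once a second during IST
watch -n1 "mysql -e \"SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue';\""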

The further behind the Joiner is, the longer this can take.  Here’s some output from the latest release of myq-tools showing wsrep stats:

[root@node2 ~]# myq_status wsrep
mycluster / node2 (idx: 1) / Galera 3.11(ra0189ab)
Cluster    Node        Outbound        Inbound          FlowC       Conflct   Gcache    Appl
    time P cnf #  stat laten  msgs data que  msgs data  que  pause snt lcf bfa    ist idx  %ef
14:04:40 P   4 2  J:Rc 0.4ms     0   0b   0     1 197b   4k    0ns   0   0   0   367k   0  94%
14:04:41 P   4 2  J:Rc 0.4ms     0   0b   0     0   0b   5k    0ns   0   0   0   368k   0  93%
14:04:42 P   4 2  J:Rc 0.4ms     0   0b   0     0   0b   6k    0ns   0   0   0   371k   0  92%
14:04:43 P   4 2  J:Rc 0.4ms     0   0b   0     0   0b   7k    0ns   0   0   0   373k   0  92%
14:04:44 P   4 2  J:Rc 0.4ms     0   0b   0     0   0b   8k    0ns   0   0   0   376k   0  92%
14:04:45 P   4 2  J:Rc 0.4ms     0   0b   0     0   0b  10k    0ns   0   0   0   379k   0  92%
14:04:46 P   4 2  J:Rc 0.4ms     0   0b   0     0   0b  11k    0ns   0   0   0   382k   0  92%
14:04:47 P   4 2  J:Rc 0.4ms     0   0b   0     0   0b  12k    0ns   0   0   0   386k   0  91%
14:04:48 P   4 2  J:Rc 0.4ms     0   0b   0     0   0b  13k    0ns   0   0   0   390k   0  91%
14:04:49 P   4 2  J:Rc 0.4ms     0   0b   0     0   0b  14k    0ns   0   0   0   394k   0  91%
14:04:50 P   4 2  J:Rc 0.4ms     0   0b   0     0   0b  15k    0ns   0   0   0   397k   0  91%
14:04:51 P   4 2  J:Rc 0.4ms     0   0b   0     0   0b  16k    0ns   0   0   0   401k   0  91%
14:04:52 P   4 2  J:Rc 0.4ms     0   0b   0     0   0b  18k    0ns   0   0   0   404k   0  91%
14:04:53 P   4 2  J:Rc 0.4ms     0   0b   0     0   0b  19k    0ns   0   0   0   407k   0  91%
14:04:54 P   4 2  J:Rc 0.4ms     0   0b   0     0   0b  20k    0ns   0   0   0   411k   0  91%

The node is in ‘J:Rc’ (Joining: Receiving) state and we can see the Inbound queue growing (wsrep_local_recv_queue). Otherwise this node is not sending or receiving transactions.

Joining

Once all the requested transactions are copied over, the Joiner flips to the ‘Joining’ state, during which it starts applying the transactions as quickly as the wsrep_slave_threads can go.  For example:

Cluster    Node        Outbound        Inbound          FlowC       Conflct   Gcache    Appl
    time P cnf #  stat laten  msgs data que  msgs data  que  pause snt lcf bfa    ist idx  %ef
14:04:55 P   4 2  Jing 0.6ms     0   0b   0  2243 3.7M  19k    0ns   0   0   0   2236 288  91%
14:04:56 P   4 2  Jing 0.5ms     0   0b   0  4317 7.0M  16k    0ns   0   0   0   6520 199  92%
14:04:57 P   4 2  Jing 0.5ms     0   0b   0  4641 7.5M  12k    0ns   0   0   0  11126 393  92%
14:04:58 P   4 2  Jing 0.4ms     0   0b   0  4485 7.2M   9k    0ns   0   0   0  15575 200  93%
14:04:59 P   4 2  Jing 0.5ms     0   0b   0  4564 7.4M   5k    0ns   0   0   0  20102 112  93%

Notice that the Inbound msgs (wsrep_received) start increasing rapidly while the queue decreases accordingly.

Joined

14:05:00 P 4 2 Jned 0.5ms 0 0b 0 4631 7.5M 2k 0ns 0 0 0 24692 96 94%

Towards the end the node briefly switches to the ‘Joined’ state, though that is a fast state in this case. ‘Joining’ and ‘Joined’ are similar states; the difference (I believe) is that:

  • ‘Joining’ is applying transactions acquired via the IST
  • ‘Joined’ is applying transactions that have queued up via standard Galera replication since the IST started (i.e., everything that has happened on the cluster since the IST)
Flow control during Joining/Joined states

The Codership documentation says something interesting about ‘Joined’ (from experimentation, I believe the ‘Joining’ state behaves the same way here):

Nodes in this state can apply write-sets. Flow Control here ensures that the node can eventually catch up with the cluster. It specifically ensures that its write-set cache never grows. Because of this, the cluster wide replication rate remains limited by the rate at which a node in this state can apply write-sets. Since applying write-sets is usually several times faster than processing a transaction, nodes in this state hardly ever effect cluster performance.

What this essentially means is that a Joiner’s wsrep_local_recv_queue is allowed to shrink but NEVER GROW during an IST catchup.  Growth will trigger flow control, but why would it grow?  Writes on other cluster nodes must still be replicated to our Joiner and added to the queue.

If the Joiner’s apply rate is less than the rate of writes coming from Cluster replication, flow control will be applied to slow down Cluster replication (read: your application writes).  As far as I can tell, there is no way to tune this or turn it off.  The Codership manual continues here:

The one occasion when nodes in the JOINED state do effect cluster performance is at the very beginning, when the buffer pool on the node in question is empty.

Essentially a Joiner node with a cold cache can really hurt performance on your cluster.  This can possibly be improved by:

  • Better IO and other resources available to the Joiner for a quicker cache warmup.  A huge example of this would be flash over conventional storage.
  • Buffer pool preloading (a sketch follows this list)
  • More Galera apply threads
  • etc.
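
As a concrete illustration of the buffer pool preloading idea above, MySQL and Percona Server 5.6+ can persist and reload the buffer pool page list. A hedged sketch (the option names are standard; how much it helps depends on your workload):

# Persist the buffer pool contents list across restarts (load_at_startup is a
# startup-only option, so it belongs in my.cnf):
#   innodb_buffer_pool_dump_at_shutdown = 1
#   innodb_buffer_pool_load_at_startup  = 1
# Or trigger a dump before restarting the node and a load afterwards:
mysql -e "SET GLOBAL innodb_buffer_pool_dump_now = ON;"
mysql -e "SET GLOBAL innodb_buffer_pool_load_now = ON;"
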
Synced

From what I can tell, the ‘Joined’ state ends when the wsrep_local_recv_queue drops lower than the node’s configured flow control limit.  At that point it changes to ‘Synced’ and the node behaves more normally (with respect to flow control).

Cluster    Node        Outbound        Inbound          FlowC       Conflct   Gcache    Appl
    time P cnf #  stat laten  msgs data que  msgs data  que  pause snt lcf bfa    ist idx  %ef
14:05:01 P   4 2  Sync 0.5ms     0   0b   0  3092 5.0M    0    0ns   0   0   0  27748 150  94%
14:05:02 P   4 2  Sync 0.5ms     0   0b   0  1067 1.7M    0    0ns   0   0   0  28804 450  93%
14:05:03 P   4 2  Sync 0.5ms     0   0b   0  1164 1.9M    0    0ns   0   0   0  29954  67  92%
14:05:04 P   4 2  Sync 0.5ms     0   0b   0  1166 1.9M    0    0ns   0   0   0  31107 280  92%
14:05:05 P   4 2  Sync 0.5ms     0   0b   0  1160 1.9M    0    0ns   0   0   0  32258 606  91%
14:05:06 P   4 2  Sync 0.5ms     0   0b   0  1154 1.9M    0    0ns   0   0   0  33401 389  90%
14:05:07 P   4 2  Sync 0.5ms     0   0b   0  1147 1.8M    1    0ns   0   0   0  34534 297  90%
14:05:08 P   4 2  Sync 0.5ms     0   0b   0  1147 1.8M    0    0ns   0   0   0  35667 122  89%
14:05:09 P   4 2  Sync 0.5ms     0   0b   0  1121 1.8M    0    0ns   0   0   0  36778 617  88%
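
The flow control limit mentioned above is the gcs.fc_limit provider option. A hedged sketch of how to inspect it, together with the cumulative pause counter:

# Show the current flow control threshold and how much the cluster has been paused
mysql -e "SHOW VARIABLES LIKE 'wsrep_provider_options';" | tr ';' '\n' | grep fc_limit
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused';"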

Conclusion

You might not notice these states during IST unless you are watching the Joiner closely, but if your IST is taking a long while, the breakdown above should make it easy to understand what is happening.

The post PXC – Incremental State transfers in detail appeared first on MySQL Performance Blog.

Categories: MySQL

MySQL QA Episode 11: Valgrind Testing: Pro’s, Con’s, Why and How

MySQL Performance Blog - Tue, 2015-08-04 10:00

Today’s episode is all about Valgrind – from the pros to the cons, from the why to the how! This episode will be of interest to anyone who works, or wants to work, with Valgrind on a regular or semi-regular basis.

  1. Pro’s/Why
  2. Con’s
  3. How
    1. Using the latest version
      sudo [yum/apt-get] install valgrind
      #OR#
      sudo [yum/apt-get] remove valgrind
      sudo [yum/apt-get] install bzip2 glibc*
      wget http://valgrind.org/downloads/valgrind-3.10.1.tar.bz2
      tar -xf valgrind-3.10.1.tar.bz2; cd valgrind-3.10.1
      ./configure; make; sudo make install
      valgrind --version # This should now read 3.10.1
    2. VGDB (cd ./mysql-test)
      ./lib/v1/mysql-test-run.pl --start-and-exit --valgrind --valgrind-option="--leak-check=yes"
      --valgrind-option="--vgdb=yes" --valgrind-option="--vgdb-error=0" 1st
      #OR# valgrind --leak-check=yes --vgdb=yes --vgdb-error=0 ./mysqld --options…
      #ALSO CONSIDER# --num-callers=40 --show-reachable=yes --track-origins=yes
      # (a standalone mysqld-under-Valgrind run is sketched after this list)
      gdb /Percona-server/bin/mysqld
      (gdb) target remote | vgdb
      (gdb) bt
    3. pquery Valgrind
    4. Valgrind stacks overview & analysis
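
Complementing item 3.2 above, here is a minimal sketch of a standalone Valgrind run against mysqld; the basedir/datadir paths are assumptions, so adjust them to your build:

valgrind --tool=memcheck --leak-check=yes --num-callers=40 \
  --show-reachable=yes --track-origins=yes \
  /Percona-server/bin/mysqld --basedir=/Percona-server --datadir=/Percona-server/data \
  --user=mysql 2>&1 | tee /tmp/valgrind_mysqld.log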

Full-screen viewing @ 720p resolution recommended.

 

The post MySQL QA Episode 11: Valgrind Testing: Pro’s, Con’s, Why and How appeared first on MySQL Performance Blog.

Categories: MySQL