MySQL

Replication Triggers a Performance Schema Issue on Percona XtraDB Cluster

MySQL Performance Blog - Fri, 2016-10-21 21:07

In this blog post, we’ll look at how replication triggers a Performance Schema issue on Percona XtraDB Cluster.

During an upgrade to Percona XtraDB Cluster 5.6, I faced an issue that I wanted to share. In this environment, we set up three Percona XtraDB Cluster nodes (mostly configured as default), copied from a production server. We configured one of the members of the cluster as the slave of the production server.

During the testing process, we found that a full table scan query took four times longer on the node where replication was configured than on the nodes where it was not. After reviewing nearly everything related to the query, we decided to use perf.

We executed:

perf record -a -g -F99 -p $(pidof mysqld) -- sleep 60

While perf was recording, we ran the query a couple of times in another terminal. Then we executed:

perf report > perf.out

We found this useful information in perf.out:

# To display the perf.data header info, please use --header/--header-only options.
#
# Samples: 5K of event 'cpu-clock'
# Event count (approx.): 57646464070
#
# Children      Self  Command  Shared Object  Symbol
# ........  ........  .......  .............  ......................
#
    62.03%    62.01%  mysqld   mysqld         [.] my_timer_cycles
            |
            ---my_timer_cycles
     4.66%     4.66%  mysqld   mysqld         [.] 0x00000000005425d4
            |
            ---0x5425d4
     4.66%     0.00%  mysqld   mysqld         [.] 0x00000000001425d4
            |
            ---0x5425d4
     3.31%     3.31%  mysqld   mysqld         [.] 0x00000000005425a7
            |
            ---0x5425a7

As you can see, the my_timer_cycles function took 62.03% of the time. Related to this, we found a blog (http://dtrace.org/blogs/brendan/2011/06/27/viewing-the-invisible/) that explained how after enabling the Performance Schema, the performance dropped 10%. So, we decided to disable Performance Schema in order to see if this issue was related to the one described in the blog. We found that after the restart required by disabling Performance Schema, the query was taking the expected amount of time.
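Disabling the Performance Schema requires a server restart, since the performance_schema variable is read-only at runtime. As a minimal sketch (the variable name and my.cnf placement are standard MySQL; adjust for your configuration):

```sql
-- In my.cnf, under the [mysqld] section, set:
--   performance_schema = OFF
-- then restart mysqld. Afterwards, verify the change:
SHOW GLOBAL VARIABLES LIKE 'performance_schema';
```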

We also found that this was triggered by replication, and that nodes rebuilt from this member could inherit the issue. The same applied even when rebuilding from a member that was OK: the new member might still execute the query slower.

Finally, you should take into account that my_timer_cycles seems to be called on a per-row basis, so if your dataset is small you will never notice this issue. However, if you are doing a full table scan of a million-row table, you could face this issue.

Conclusion

If you are having query performance issues, and you can’t find the root cause, try disabling or debugging instruments from the Performance Schema to see if that is causing the issue.
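If a restart is not possible, a common alternative (not the method used in this post, but a standard one: the setup tables are part of stock MySQL 5.6+) is to disable all Performance Schema instruments and consumers at runtime:

```sql
-- Stop instrumenting and timing all events:
UPDATE performance_schema.setup_instruments
   SET ENABLED = 'NO', TIMED = 'NO';
-- Stop aggregating collected events:
UPDATE performance_schema.setup_consumers
   SET ENABLED = 'NO';
```

Note that this may not remove all overhead; in cases like the one described here, a restart with performance_schema disabled may still be required.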

Categories: MySQL

Percona Responds to East Coast DDoS Attack

MySQL Performance Blog - Fri, 2016-10-21 20:42

As noted in several media outlets, many web sites have been affected by a DDoS attack on Dyn today. Since Percona uses Dyn for its DNS server, we are experiencing issues as well.  

The attack has impacted the percona.com web site availability and performance, including all related services such as our forums, blogs and downloads.

Our first response was to wait it out, and trust the Dyn team to deal with the attack — they have to handle issues like this all the time, and are generally pretty good at resolving these issues quickly. This was not the case today.

As such, to restore service, we have added another DNS provider (DNS Made Easy). This has restored connectivity for the majority of users, and the situation should continue to improve as the changed list of DNS servers propagates (check current status).

Our customer support site, Zendesk, has also been impacted by the DDoS attack. We are using a similar strategy to remedy our support site. You can see the current status for Zendesk’s domain resolution here.

For additional information about this incident from Dyn, check out the official Dyn incident status page.

If you’re a Percona customer and have trouble accessing our Customer Support portal, do not hesitate to call or Skype us instead.

Thank you for your patience. We will provide updates as the situation develops.

Categories: MySQL

Percona Server 5.7.15-9 is now available

MySQL Performance Blog - Fri, 2016-10-21 16:14

Percona announces the GA release of Percona Server 5.7.15-9 on October 21, 2016. Download the latest version from the Percona web site or the Percona Software Repositories.

Based on MySQL 5.7.15, including all the bug fixes in it, Percona Server 5.7.15-9 is the current GA release in the Percona Server 5.7 series. Percona provides completely open-source and free software. Find release details in the 5.7.15-9 milestone at Launchpad.

Bugs Fixed
  • The Audit Log Plugin could write a malformed record after audit_log_flush was set to ON in ASYNC and PERFORMANCE modes. Bug fixed #1613650.
  • Running SELECT DISTINCT x...ORDER BY y LIMIT N,N could lead to a server crash. Bug fixed #1617586.
  • Workloads with statements that take non-transactional locks (LOCK TABLES, global read lock, and similar) could have caused deadlocks when running under Thread Pool with high priority queue enabled and thread_pool_high_prio_mode set to transactions. Fixed by placing such statements into the high priority queue even with the above thread_pool_high_prio_mode setting. Bugs fixed #1619559 and #1374930.
  • Fixed memory leaks in Audit Log Plugin. Bug fixed #1620152 (upstream #71759).
  • Server could crash due to a glibc bug in handling short-lived detached threads. Bug fixed #1621012 (upstream #82886).
  • QUERY_RESPONSE_TIME_READ and QUERY_RESPONSE_TIME_WRITE were returning QUERY_RESPONSE_TIME table data if accessed through a name that is not fully uppercase. Bug fixed #1552428.
  • Cipher ECDHE-RSA-AES128-GCM-SHA256 was listed in the list of supported ciphers but it wasn’t supported. Bug fixed #1622034 (upstream #82935).
  • Successful recovery of a torn page from the doublewrite buffer was shown as a warning in the error log. Bug fixed #1622985.
  • LRU manager threads could run too long on a server shutdown, causing a server crash. Bug fixed #1626069.
  • tokudb_default was not recognized by Percona Server as a valid row format. Bug fixed #1626206.
  • InnoDB ANALYZE TABLE didn’t remove its table from the background statistics processing queue. Bug fixed #1626441 (upstream #71761).
  • Upstream merge for #81657 to 5.6 was incorrect. Bug fixed #1626936 (upstream #83124).
  • Fixed multi-threaded slave thread leaks that happened in case of thread create failure. Bug fixed #1619622 (upstream #82980).
  • Shutdown waiting for a purge to complete was undiagnosed for the first minute. Bug fixed #1616785.

Other bugs fixed: #1614439, #1614949, #1624993 (#736), #1613647, #1615468, #1617828, #1617833, #1626002 (upstream #83073), #904714, #1610102, #1610110, #1613728, #1614885, #1615959, #1616333, #1616404, #1616768, #1617150, #1617216, #1617267, #1618478, #1618819, #1619547, #1619572, #1620583, #1622449, #1623011, #1624992 (#1014), #735, #1626500, #1628913, #952920, and #964.

NOTE: If you haven’t already, make sure to add the new Debian/Ubuntu repository signing key.

The release notes for Percona Server 5.7.15-9 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.

Categories: MySQL

Life on Mobile: From Android to iOS

Xaprb, home of innotop - Fri, 2016-10-21 01:46

I’ve been living an increasingly mobile life lately, by which I mean laptops aren’t my sole or even primary computing device anymore. For the last two years I’ve been doing more computing on mobile devices than laptops or desktops. I’m writing this post on my iPad Pro. At the same time, I’m a heavy user of old-school technologies: command line, LaTeX, terminal-based editors, etc. I find it interesting that my life is increasingly possible to run on a mobile device, while at the same time my laptop remains absolutely indispensable. As I thought about this, I found myself organizing the last few years’ worth of lessons into distinct categories, which I’m strangely compelled to write into blog posts. So, here goes! In the first edition we will follow Baron on his adventures as he transitions from Android to iOS.

Why switch from Android?

In the old days I had a flip phone, and at some point I upgraded to an Android smartphone. It was a Verizon Droid with a slide-out physical keyboard. Then after a few years I switched to iPhone.

I switched to the Apple ecosystem for the same reason I switched from Linux on PC laptop to Macs: I was frustrated with the low-quality experience. I wanted to stop fussing with things that seemed device-specific, problems that only happened because the OS and the device were wedged together instead of being designed for each other from the start. Android always reminded me of Dell’s version of Windows 98, which came with a bunch of junk preinstalled and needed third-party drivers to work.

On Android, apps crashed constantly, battery life was poor, and most frustrating of all, the manufacturers seemed hell-bent on forcing me to upgrade my device by making the OS stop functioning on the old device. I had two subsequent phones that were forced to install over-the-air updates and rendered the phone basically unusable. Performance slowed to such a crawl that interactions with the phone had lags of up to a few seconds.

There was also a bunch of junk installed: some stupid NFL (American football) games, a “Verizon Navigator” app that I’d never use, since I’d want to use the Google Maps app. The whole experience reminded me of an AOL browser toolbar more than anything… these were system-installed apps that couldn’t be uninstalled, which led to the strong feeling that I had paid for the phone but was not allowed to truly own my experience of using it. It didn’t feel respectful or user-centric to me.

Topping that off was the lack of ability to control what apps accessed on my phone. It seemed like every time I installed an app, the installation dialog notified me that it required access to a bunch of stuff I didn’t want to give it, things like my contacts, which I knew could only be useful for unethical growth-hacking. (I was soon vindicated, as there were scandals with many major apps using this data in ways the users didn’t authorize.) In my view, this design decision by Android was a strong indication of who owned whom.

Switching to iPhone

The second time a phone stopped working because of a forced OS upgrade, I decided it could not be any worse, and I switched to iPhone.

I was immediately delighted. The iPhone is higher performance, better designed, has better battery life, etc. It has no third-party apps installed. There is a better variety of apps and most of them actually work instead of crashing constantly. They are, by and large, much more polished. (As an app developer, it must be much easier to design for a few phones than the thousands that use Android.)

On iPhone, the OS and the hardware are clearly designed for each other. There’s no weirdness with screen sizes or location of buttons. There’s ONE way to do things, not a million. (I’ve seen a bunch of my friends and family members getting confused when using each others’ Android phones; there’s no single Android, and apps and OS features you get used to are different or missing on someone else’s phone.)

Some things were odd at first, but quickly made sense. Preferences, for example. On Android, each app has its own preferences, and many of them choose very different designs and navigation flows for them. On iPhone, most apps use the system preferences dialogs. (The ones that don’t are rarely well designed.)

The iPhone gives the user control of privacy and what apps are allowed to access. It’s a clear existence proof that apps can be made to deal with the privileges the user grants them. An app that wants to access your contacts asks for that, and you can allow or deny it. Your choice is remembered, but can be changed later. This ranges from obvious things (access to camera, access to photos, access to contacts) to things you might not have thought about: app notifications, for example, are unified. Don’t want annoyances from the Twitter app? Turn them off with a single permission. In Android, that’s something you do, with great effort, within the Twitter app itself. I vividly remember Twitter adding new types of notifications, which I’d then start getting and have to go in and disable individually on Android. (No, I don’t want you to alert me when “a company I trust” tweets something “important”! Especially since I’m not even following that company!)

The iPhone also has great built-in apps for the most important things I do in my life. I’ve found myself using Notes and Reminders constantly. In fact, I practically run my life with those two apps. What I need to do tomorrow, reference material, things to follow up with, groceries, take out the trash every Wednesday night, even this blog post draft: all in Notes and Reminders. Maybe things have changed in the last 16 months, but I never saw apps on Android that worked that well, let alone built-in ones.

After about a year with the iPhone, I was tempted to buy an iPad Air, and I wasn’t disappointed. It quickly became a tool for my evenings: quicker and lighter to use for the common tasks after supper, such as reading, email, jotting down notes for tomorrow, and many other activities. I’ve owned a variety of Android tablets for a few years, and the iPad was a big step up from them. After a while I found myself using my personal iPad so much for professional work that I bought an iPad Pro, which I thought would be a MacBook replacement. I’ll write more about that later.

Conclusion

In conclusion, I’ve been much happier and much more productive on iPhone, iPad, and iPad Pro than I was on Android phones and tablets. Because the devices and apps work so much better, and the experience is so much more pleasurable, I use them a lot more and I get a lot more done. For the way I use them, my iPhone and iPad are huge productivity levers, allowing me to achieve much more in a day’s work than I did previously.

Next time I’ll write about the apps I use on iPhone and iPad for everything from source-code editing to graphic design. I’ve been able to figure out the boundaries of what works and doesn’t, and therefore which tasks I can perform on mobile, and what remains on laptop/desktop devices. Spoiler: I can’t get rid of my MacBook yet. But I wrote, edited, put into Git, and committed and pushed this blog post entirely from iPad.

Photo Credit

Categories: MySQL


MySQL 8.0: Descending Indexes Can Speed Up Your Queries

MySQL Performance Blog - Thu, 2016-10-20 14:04

In this blog, we’ll discuss descending indexes in MySQL 8.0.

Summary

The future MySQL 8.0 will (probably) have a great new feature: support for index sort order on disk (i.e., indexes can be physically sorted in descending order). In the MySQL 8.0 Labs release (new optimizer preview), when you create an index you can specify the order "asc" or "desc", and it will be honored (for B-Tree indexes). That can be especially helpful for queries like "SELECT … ORDER BY event_date DESC, name ASC LIMIT 10" (an ORDER BY clause mixing ASC and DESC sorts).

MySQL 5.6 and 5.7 Index Order

Actually, the support for this syntax (CREATE INDEX … col_name … [ASC | DESC]) has been there for a long time, but it was reserved for future extensions: if you created an index and specified "DESC", the keyword was ignored in MySQL 5.6 and 5.7 (an index is always created in ascending order).
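You can see the old behavior for yourself in MySQL 5.6/5.7. A quick sketch (the hypothetical index name date_desc is mine; the table matches the example below):

```sql
-- MySQL 5.6/5.7: DESC is accepted by the parser but silently ignored
ALTER TABLE events ADD KEY date_desc (event_date DESC);
SHOW CREATE TABLE events\G
-- The output shows KEY `date_desc` (`event_date`) with no DESC:
-- the index was created in ascending order anyway.
```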

At the same time, MySQL (all versions) can scan an index backward, so both of these queries will use the index:

CREATE TABLE `events` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(100) DEFAULT NULL,
  `event_date` datetime DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `date_name` (`event_date`,`name`)
) ENGINE=InnoDB AUTO_INCREMENT=2490312 DEFAULT CHARSET=latin1

mysql> explain select * from events order by event_date, name limit 10\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: events
   partitions: NULL
         type: index
possible_keys: NULL
          key: date_name
      key_len: 109
          ref: NULL
         rows: 10
     filtered: 100.00
        Extra: Using index

mysql> explain select * from events order by event_date desc, name desc limit 10\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: events
   partitions: NULL
         type: index
possible_keys: NULL
          key: date_name
      key_len: 109
          ref: NULL
         rows: 10
     filtered: 100.00
        Extra: Using index
1 row in set, 1 warning (0.00 sec)

In the second query, MySQL scans the index backward for two fields.

What is also very important here is the "LIMIT 10". MySQL scans the table in index order (avoiding a filesort), then aborts the scan after finding 10 rows. That makes the query almost instant:

mysql> select * from events order by event_date desc, name desc limit 10;
+--------+-------+---------------------+
| id     | name  | event_date          |
+--------+-------+---------------------+
|      8 | test1 | 2016-10-09 10:01:06 |
|      7 | test1 | 2016-10-09 10:01:06 |
| 262125 | new2  | 2016-10-09 10:01:06 |
| 262124 | new2  | 2016-10-09 10:01:06 |
| 262123 | new2  | 2016-10-09 10:01:06 |
| 262122 | new2  | 2016-10-09 10:01:06 |
| 131053 | new1  | 2016-10-09 10:01:06 |
| 131052 | new1  | 2016-10-09 10:01:06 |
|      6 | test1 | 2016-10-09 10:01:05 |
|      5 | test1 | 2016-10-09 10:01:05 |
+--------+-------+---------------------+
10 rows in set (0.00 sec)

But what about a mixed order, DESC and ASC? This makes sense in our example, where we want to show the latest events first but use a secondary alphabetical order by event name. In this case the query does a filesort and performs much slower:

mysql> explain select * from events order by event_date desc, name asc limit 10\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: events
   partitions: NULL
         type: index
possible_keys: NULL
          key: date_name
      key_len: 109
          ref: NULL
         rows: 2017864
     filtered: 100.00
        Extra: Using index; Using filesort
1 row in set, 1 warning (0.00 sec)

mysql> select * from events order by event_date desc, name asc limit 10;
+--------+-------+---------------------+
| id     | name  | event_date          |
+--------+-------+---------------------+
| 131053 | new1  | 2016-10-09 10:01:06 |
| 131052 | new1  | 2016-10-09 10:01:06 |
| 262123 | new2  | 2016-10-09 10:01:06 |
| 262122 | new2  | 2016-10-09 10:01:06 |
| 262124 | new2  | 2016-10-09 10:01:06 |
| 262125 | new2  | 2016-10-09 10:01:06 |
|      7 | test1 | 2016-10-09 10:01:06 |
|      8 | test1 | 2016-10-09 10:01:06 |
| 131055 | new1  | 2016-10-09 10:01:05 |
| 131054 | new1  | 2016-10-09 10:01:05 |
+--------+-------+---------------------+
10 rows in set (2.41 sec)

MySQL 8.0 (Labs release)

The MySQL Server 8.0.0 Optimizer labs release includes new support for index sort order (for InnoDB only). Here is how our query from above performs:

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 8.0.0-labs-opt MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> alter table events add key date_desc_name_asc(event_date desc, name asc);
Query OK, 0 rows affected (8.47 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> show create table events\G
*************************** 1. row ***************************
       Table: events
Create Table: CREATE TABLE `events` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(100) DEFAULT NULL,
  `event_date` datetime DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `date_desc_name_asc` (`event_date` DESC,`name`)
) ENGINE=InnoDB AUTO_INCREMENT=2490312 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

I’ve created an index, targeted for our specific query order: event_date descending, name ascending. Now it works much faster:

mysql> explain select * from events order by event_date desc, name asc limit 10\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: events
   partitions: NULL
         type: index
possible_keys: NULL
          key: date_desc_name_asc
      key_len: 109
          ref: NULL
         rows: 10
     filtered: 100.00
        Extra: Using index
1 row in set, 1 warning (0.00 sec)

mysql> select * from events order by event_date desc, name asc limit 10;
+--------+-------+---------------------+
| id     | name  | event_date          |
+--------+-------+---------------------+
| 131052 | new1  | 2016-10-09 10:01:06 |
| 131053 | new1  | 2016-10-09 10:01:06 |
| 262122 | new2  | 2016-10-09 10:01:06 |
| 262123 | new2  | 2016-10-09 10:01:06 |
| 262124 | new2  | 2016-10-09 10:01:06 |
| 262125 | new2  | 2016-10-09 10:01:06 |
|      7 | test1 | 2016-10-09 10:01:06 |
|      8 | test1 | 2016-10-09 10:01:06 |
| 131054 | new1  | 2016-10-09 10:01:05 |
| 131055 | new1  | 2016-10-09 10:01:05 |
+--------+-------+---------------------+
10 rows in set (0.00 sec)

The index (event_date desc, name asc) satisfies two conditions:

  • order by event_date desc, name asc: forward index scan
  • order by event_date asc, name desc: backward index scan

mysql> explain select * from events order by event_date asc, name desc limit 10\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: events
   partitions: NULL
         type: index
possible_keys: NULL
          key: date_desc_name_asc
      key_len: 109
          ref: NULL
         rows: 10
     filtered: 100.00
        Extra: Using index; Backward index scan
1 row in set, 1 warning (0.00 sec)

Note the “Backward index scan” in the Extra column above.

This is analogous to an index on (event_date, name) sorted in ascending order, which can satisfy both "event_date asc, name asc" and "event_date desc, name desc" (the same direction across both fields).

The original query that ran in 2.41 seconds (and performed a filesort operation), now runs almost instantly with the new index:

mysql> select * from events order by event_date desc, name asc limit 10;
+--------+-------+---------------------+
| id     | name  | event_date          |
+--------+-------+---------------------+
| 131052 | new1  | 2016-10-09 10:01:06 |
| 131053 | new1  | 2016-10-09 10:01:06 |
| 262122 | new2  | 2016-10-09 10:01:06 |
| 262123 | new2  | 2016-10-09 10:01:06 |
| 262124 | new2  | 2016-10-09 10:01:06 |
| 262125 | new2  | 2016-10-09 10:01:06 |
|      7 | test1 | 2016-10-09 10:01:06 |
|      8 | test1 | 2016-10-09 10:01:06 |
| 131054 | new1  | 2016-10-09 10:01:05 |
| 131055 | new1  | 2016-10-09 10:01:05 |
+--------+-------+---------------------+
10 rows in set (0.00 sec)

Please note descending indexes only work for InnoDB:

mysql> create table events_myisam like events;
Query OK, 0 rows affected (0.02 sec)

mysql> alter table events_myisam engine=MyISAM;
ERROR 1178 (42000): The storage engine for the table doesn't support descending indexes

Workaround for MySQL 5.7

A (rather limited) workaround exists for MySQL 5.7, and involves creating (and indexing) a virtual field. Let's say that instead of a varchar, we need to order by an unsigned integer (i.e., "id", which is an auto_increment field in another table). Our query will look like this: "select * from events order by event_date desc, profile_id asc limit 10". In this case, we can "invert" profile_id by negating it and storing the result in a "virtual" (aka "generated") column:

mysql> alter table events_virt add profile_id_negative int GENERATED ALWAYS AS (-profile_id);
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

Then we can index it together with the date field:

mysql> alter table events_virt add key event_date_profile_id_negative(event_date, profile_id_negative);
Query OK, 0 rows affected (7.21 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> show create table events_virt\G
*************************** 1. row ***************************
       Table: events_virt
Create Table: CREATE TABLE `events_virt` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(100) DEFAULT NULL,
  `event_date` datetime DEFAULT NULL,
  `profile_id` int(11) DEFAULT NULL,
  `profile_id_negative` int(11) GENERATED ALWAYS AS (-(`profile_id`)) VIRTUAL,
  PRIMARY KEY (`id`),
  KEY `event_date_profile_id_negative` (`event_date`,`profile_id_negative`)
) ENGINE=InnoDB AUTO_INCREMENT=2424793 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Now we can use the event_date_profile_id_negative index, with profile_id_negative in "desc" sort order:

mysql> explain select * from events_virt order by event_date desc, profile_id_negative desc limit 10\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: events_virt
   partitions: NULL
         type: index
possible_keys: NULL
          key: event_date_profile_id_negative
      key_len: 11
          ref: NULL
         rows: 10
     filtered: 100.00
        Extra: NULL
1 row in set, 1 warning (0.00 sec)

mysql> select * from events_virt order by event_date desc, profile_id_negative desc limit 10;
+--------+-------+---------------------+------------+---------------------+
| id     | name  | event_date          | profile_id | profile_id_negative |
+--------+-------+---------------------+------------+---------------------+
|      7 | test1 | 2016-10-09 10:01:06 |         24 |                 -24 |
|      8 | test1 | 2016-10-09 10:01:06 |         80 |                 -80 |
| 131052 | new1  | 2016-10-09 10:01:06 |     131063 |             -131063 |
| 131053 | new1  | 2016-10-09 10:01:06 |     131117 |             -131117 |
| 262125 | new2  | 2016-10-09 10:01:06 |     262130 |             -262130 |
| 262123 | new2  | 2016-10-09 10:01:06 |     262169 |             -262169 |
| 262124 | new2  | 2016-10-09 10:01:06 |     262193 |             -262193 |
| 262122 | new2  | 2016-10-09 10:01:06 |     262210 |             -262210 |
|      5 | test1 | 2016-10-09 10:01:05 |         80 |                 -80 |
|      6 | test1 | 2016-10-09 10:01:05 |        101 |                -101 |
+--------+-------+---------------------+------------+---------------------+
10 rows in set (0.00 sec)

That is much faster, yet it produces the same results as the following query:

mysql> select * from events_virt order by event_date desc, profile_id asc limit 10;
+--------+-------+---------------------+------------+---------------------+
| id     | name  | event_date          | profile_id | profile_id_negative |
+--------+-------+---------------------+------------+---------------------+
|      7 | test1 | 2016-10-09 10:01:06 |         24 |                 -24 |
|      8 | test1 | 2016-10-09 10:01:06 |         80 |                 -80 |
| 131052 | new1  | 2016-10-09 10:01:06 |     131063 |             -131063 |
| 131053 | new1  | 2016-10-09 10:01:06 |     131117 |             -131117 |
| 262125 | new2  | 2016-10-09 10:01:06 |     262130 |             -262130 |
| 262123 | new2  | 2016-10-09 10:01:06 |     262169 |             -262169 |
| 262124 | new2  | 2016-10-09 10:01:06 |     262193 |             -262193 |
| 262122 | new2  | 2016-10-09 10:01:06 |     262210 |             -262210 |
|      5 | test1 | 2016-10-09 10:01:05 |         80 |                 -80 |
|      6 | test1 | 2016-10-09 10:01:05 |        101 |                -101 |
+--------+-------+---------------------+------------+---------------------+
10 rows in set (2.52 sec)
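The generated-column trick can be illustrated outside of MySQL: negating the key turns a descending sort into an ascending one, so one ascending composite index can serve a mixed DESC/ASC order. A minimal Python sketch (the tuples are made-up sample rows, not the data above):

```python
# Sketch: why sorting ascending on (-event_day, profile_id) is the same as
# sorting by event_day DESC, profile_id ASC.
events = [(3, 24), (2, 101), (3, 80), (2, 80)]  # (event_day, profile_id)

# Reference ordering: event_day DESC, profile_id ASC (two stable sorts)
ref = sorted(events, key=lambda e: e[1])             # profile_id ASC
ref = sorted(ref, key=lambda e: e[0], reverse=True)  # event_day DESC

# The trick: one ascending sort over (negated day, profile_id), which is
# exactly what the profile_id_negative generated column gives the index.
trick = sorted(events, key=lambda e: (-e[0], e[1]))

print(ref == trick)  # → True
```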

Conclusion

MySQL 8.0 (Labs release) has a preview of this great new index sort order feature, which can significantly increase the performance of a frequently slow query pattern: ORDER BY field1 DESC, field2 ASC LIMIT N.
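In the Labs preview, the sort order becomes part of the index definition itself, so the generated-column workaround is no longer needed. A sketch of the syntax (column names follow the example above; the exact form may change before GA):

```sql
-- MySQL 8.0 Labs preview sketch: a descending key part in the index
ALTER TABLE events
  ADD KEY event_date_desc_profile_id (event_date DESC, profile_id ASC);

-- The index can then be scanned in order for:
-- SELECT * FROM events ORDER BY event_date DESC, profile_id ASC LIMIT 10;
```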

This feature can be found in other databases (for example, in MongoDB). This much-needed feature may at some point be backported to MySQL 5.7, so we can use it in that version as well.

I want to thank the MySQL Server Development team at Oracle for implementing it. If you are interested in other MySQL optimizer features, take a look at Manyi Lu’s presentation at Percona Live Amsterdam, where she talks about other great MySQL 8.0 features: histograms, invisible indexes, common table expressions and extended JSON support.

Categories: MySQL

A Guide to the Percona Monitoring and Management Demo

MySQL Performance Blog - Wed, 2016-10-19 18:10

As part of the release of Percona Monitoring and Management, we set up an extensive online Percona Monitoring and Management demo at pmmdemo.percona.com. The demo is pretty self-explanatory and easy to explore, but you will probably find it a lot more interesting with some explanation. This post provides some guidance.

The PMM demo has a wide variety of MySQL and MongoDB servers set up. Besides its use as a public demo, it shows how things look with different software versions (what works and what does not). Different versions have different quirks and information available.

If you go to System Overview you will see the dropdown of different system names:

 

For “MySQL,” we’re using fairly self-explanatory names:

  • mysql57r is MySQL 5.7 (which is set up as a replica)
  • mdb101 is MariaDB 10.1
  • pmm-server is the local server running the PMM server component
  • pxc56-1 is Percona XtraDB Cluster (node 1)

The sd-XXXXX entries are MongoDB nodes that we haven’t renamed yet.

To keep things interesting (for me at least), all the MySQL nodes are running different workloads. The workloads are available in the GitHub repository if you want to play with PMM on your host and need workloads to generate graphs.

Percona Server for MySQL 5.7 (ps57) currently has the most complex workload, touching the InnoDB, MyISAM and TokuDB storage engines. It also has a special workload that shows how the Query Cache stalls with periodic updates, etc. This populates all the graphs, but it can also make them very crowded for viewing (see the TokuDB Dashboard):

There is a mix of workloads set up for all the storage engines to illustrate the variety that typically exists in real systems: sysbench is running and regularly injecting a small number of transactions to generate a light load, plus a periodic heavy reporting query kicks the box. I have also added a heavy spike of updates once an hour to trigger slave lag. It is very interesting to see how this all plays out if you know what to look for.

For example, we can see that five minutes of a very heavy update load causes a replication lag of up to 15 minutes (see the Replication Dashboard):

Or watch how the buffer pool content changes, becoming populated with mostly dirty pages during a short spike of heavy updates (InnoDB Dashboard):

You might notice that the host and interval are constantly reset when switching between dashboards. Grafana was designed this way to keep dashboards independent. A better way to navigate is to use the Menu in the top right corner to switch dashboards, which preserves the selected hosts and time interval settings. This can be a very helpful tool for drilling down into the details of a specific period:

Our demo workload is not as advanced for MongoDB. But you can still get a very useful MongoDB cluster overview:

Or drill down into the performance of specific replica sets or individual nodes:

Another part of the PMM demo you should consider exploring is the Query Analytics component. This component shows you the queries that are causing the most load on the system (together with the load that corresponds to each query), how frequently the query occurs (QPS), and the query response time details:

This view helps you understand which queries might need optimization. Typically, it is a good idea to look at queries generating a lot of load, or queries that have response times that don’t always meet your application response time requirements. The 95th percentile and Max Latency scores show these points, and are visible when you mouse over the latency bar:

Things get really interesting when you click on a query that is a candidate for optimization. The dashboard shows everything you need to know to optimize the query:

You can see a number of per-query metrics:

  • Amount of IO created
  • Number of rows sent and examined
  • How it has changed over time

You can also see overall values. For example, this query spent 73% of its execution time waiting on disk IO, which means it would benefit from either a faster disk or more memory.

The EXPLAIN output (as well as CREATE TABLE for the table in question), together with information about the table size and index cardinality, is typically all the information you need to spot missing indexes or queries that need to be optimized in some other way:

Finally, if you need to know more about the hardware (or virtual instances) and server configuration, you can click the settings icon in the left corner. It shows information about the system and the MySQL instance (as pt-summary and pt-mysql-summary would provide it):

Much of this information is available as graphs, but it is often more informative to see it this way.

There is much more to explore in the PMM demo, but this blog post is getting way too long! I’ll write about it at another time.

Most software offering a comparable level of features is commercially licensed or deployed as subscription-only, cloud-based software. Percona Monitoring and Management (PMM) software is 100% free and open source.

Take a turn with the demo. Then download it and explore it in your environment!

Categories: MySQL

Three Things to Consider When Thinking About Containers

MySQL Performance Blog - Wed, 2016-10-19 14:07

Containers like Docker and Rocket are getting more popular every day. In my conversations with customers, they consistently ask what containers are and how they can use them in their environment. If you’re as curious as most people, read on. . .

How did this happen?

From what I understand, containers grew out of Google’s (and others’) need for massive horizontal scale. Now, this is hardly a unique problem. At the time there were several different solutions out there that could help deploy and orchestrate the applications and infrastructure necessary to scale — namely virtual machines (VMs) and their orchestration services (like VMware’s vCenter). At the uber-massive scale that companies like Google were pushing, however, server virtualization had some serious drawbacks. Enter containers. . .

What is a container?

Essentially, the main difference between a container and a virtual machine is the amount of overhead involved in each unit.

A virtual machine is literally the components of an entire server, minus the physical hardware. Simplified, it’s the operating system and all additional components required to run the application. That’s a great way to separate applications for mobility, but it also requires a large amount of redundancy (and overhead).

A container is your application and all of its dependencies that run on top of the operating system, but not the operating system (OS) itself. The OS kernel is shared among all the containers on the system. This approach reduces the CPU, memory and disk overhead that virtual machines introduce by running a separate OS instance on every VM. At single system scale, this is not much of a consideration. But when you are talking about scale in the millions, it can lead to an incredible amount of savings without much of a reduction in functionality.

When you get down to the key differences, it essentially amounts to “at what level the hypervisor runs,” or if that’s too technical, the level at which the two technologies provide abstraction from their underlying components (take a look at the image below). For VMs, it abstracts the hardware from the OS. For containers, it abstracts the OS from the applications that interface with it. This is where the benefit comes from: containers allow applications to have isolation without requiring every application to have an additional copy of the operating system. Operating systems take up a great deal of a computer’s resources, so the ability to share that overhead — while getting isolation — can be extremely efficient. Google reported that they spin up roughly two billion containers per week. At that type of scale, you can see the necessity for efficiency!

Image courtesy of docker.com

How do I decide between VMs and containers?

This is the million dollar question. Currently, the two are often used together. If you’re running containers on an AWS EC2 instance, then you are running containers inside a VM. However, there are important differences to note:

Security

Security = VMs

Whenever you are consolidating resources, regardless of whether it’s with VMs or containers, security should be top of mind. This is because the process that controls access to resources presents a single, high-value target for attack. If you can compromise that process, you can potentially gain control of all resources that are using it. Scary, right?

Efficiency

Cost reduction = Containers

As I said before, you can use both. However, containers have great potential to reduce the number of physical servers you rely on, or your spend in any cloud environment, especially now that orchestration solutions like Kubernetes and Swarm help you manage your environment.

The only other thing that you should consider before deciding what to put in a container is the next point. . .

Performance

Performance = Depends

If you have services (applications) that have very spiky workloads or require a great deal of resources, you have to make very careful decisions about which services you pair together. This is true of both VMs and containers, because there is a finite amount of resources on the physical machine supporting the services. The closer that machine gets to 100% utilization, the slower it responds to the services, and the slower your applications run.

The wrap up . . .

The container space is still maturing. There are sure to be many exciting announcements to come. One thing is certain: this isn’t a fad! While you don’t have to go and start “containerizing” everything in your environment, I would suggest that you start conversations regarding where they’re a good fit in your environment. After all, someone much smarter than I said “you’re either planning for success or failure. The only difference is deliberation.”

 

Categories: MySQL

Upgrading to MySQL 5.7? Beware of the new STRICT mode

MySQL Performance Blog - Tue, 2016-10-18 23:45

This blog post discusses the ramifications of STRICT mode in MySQL 5.7.

In short

By default, MySQL 5.7 is much “stricter” than older versions of MySQL. That can make your application fail. To temporarily fix this, change the SQL_MODE to NO_ENGINE_SUBSTITUTION (same as in MySQL 5.6):

mysql> set global SQL_MODE="NO_ENGINE_SUBSTITUTION";
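Note that SET GLOBAL does not survive a server restart. To keep the permissive behavior persistently, you can also set it in the configuration file, for example (a minimal my.cnf fragment):

```ini
# my.cnf — keep the pre-5.7 SQL mode across restarts
[mysqld]
sql_mode = NO_ENGINE_SUBSTITUTION
```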

MySQL 5.7, dates and default values

The default SQL_MODE in MySQL 5.7 is:

ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION

That makes MySQL operate in “strict” mode for transactional tables.

“Strict mode controls how MySQL handles invalid or missing values in data-change statements such as INSERT or UPDATE. A value can be invalid for several reasons. For example, it might have the wrong data type for the column, or it might be out of range. A value is missing when a new row to be inserted does not contain a value for a non-NULL column that has no explicit DEFAULT clause in its definition. (For a NULL column, NULL is inserted if the value is missing.) Strict mode also affects DDL statements such as CREATE TABLE.”
http://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sql-mode-strict

That also brings up an interesting problem with the default value for the date/datetime column. Let’s say we have the following table in MySQL 5.7, and want to insert a row into it:

mysql> CREATE TABLE `events_t` (
    ->   `id` int(11) NOT NULL AUTO_INCREMENT,
    ->   `event_date` datetime NOT NULL,
    ->   `profile_id` int(11) DEFAULT NULL,
    ->   PRIMARY KEY (`id`),
    ->   KEY `event_date` (`event_date`),
    ->   KEY `profile_id` (`profile_id`)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.02 sec)

mysql> insert into events_t (profile_id) values (1);
ERROR 1364 (HY000): Field 'event_date' doesn't have a default value

The event_date column does not have a default value, and we are inserting a row without a value for event_date. That causes an error in MySQL 5.7. If we can’t use NULL, we will have to create a default value. In strict mode, we can’t use “0000-00-00” either:

mysql> alter table events_t change event_date event_date datetime NOT NULL default '0000-00-00 00:00:00';
ERROR 1067 (42000): Invalid default value for 'event_date'
mysql> alter table events_t change event_date event_date datetime NOT NULL default '2000-00-00 00:00:00';
ERROR 1067 (42000): Invalid default value for 'event_date'

We have to use a real date:

mysql> alter table events_t change event_date event_date datetime NOT NULL default '2000-01-01 00:00:00';
Query OK, 0 rows affected (0.00 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> insert into events_t (profile_id) values (1);
Query OK, 1 row affected (0.00 sec)

A much better approach, most likely, is to change the application logic to:

  • allow NULLs, or
  • always insert the real dates (i.e. use NOW() function), or
  • change the table field to timestamp and update it automatically if no value has been assigned
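For example, the last option could look like this (a sketch based on the events_t table above; since MySQL 5.6, DATETIME columns also accept CURRENT_TIMESTAMP as a default):

```sql
-- Let the server fill in the value automatically when none is supplied
ALTER TABLE events_t
  CHANGE event_date event_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;

-- Now this succeeds under strict mode:
-- INSERT INTO events_t (profile_id) VALUES (1);
```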
Further reading

Read Morgan Tocker’s article on how to transition to MySQL 5.7, and check the full sql_mode documentation.

Categories: MySQL

MySQL Downgrade Caveats

MySQL Performance Blog - Tue, 2016-10-18 20:02

In this blog, we’ll discuss things to watch out for during a MySQL downgrade.

Previously, I wrote the blog MySQL upgrade best practices. Besides upgrading your MySQL version, sometimes you need to downgrade. When it comes to downgrading MySQL, there are two types of downgrade methods supported:

  1. In-Place Downgrade: In this method, you use the existing data directory and replace MySQL binaries, followed by a mysql_upgrade execution. This type of downgrade is supported within the same release series. For example, in-place downgrades are supported when moving from 5.7.12 to 5.7.10.
  2. SQL Dump Downgrade: An SQL dump is another downgrade method, also known as a “logical downgrade.” This method involves backing up all database tables using the mysqldump program (you can also use mydumper for a parallel backup to improve backup times), then replacing the MySQL binaries, restoring the dump into the downgraded version of MySQL, and finally running the mysql_upgrade program to complete the downgrade process. This type of downgrade is supported within the same release series, and between different release levels. For example, downgrading from 5.6 to 5.5, or from 5.7 to 5.6 or 5.5.

Before downgrading, you need to be aware of a few things that could affect the process. I’ll list a few of the important ones here. For all the changes affecting the downgrade process, you should check the manual. It’s also advisable to check the release notes of the specific version you are downgrading to, to avoid any surprises.

  • In MySQL 5.6, the relay-log.info file contains a line count and a replication delay value, in a file format that differs from previous versions of MySQL. If you are downgrading below 5.6 in a replication setup, the older server will not be able to read that file format. To correct this issue on the slaves in question, edit relay-log.info and remove the initial line containing the number of lines.
  • Downgrading from 5.6 to an older version such as 5.5 via the “in-place” method can cause issues due to differences in the on-disk format for temporal types. Basically, if tables containing TIME, DATETIME or TIMESTAMP columns were CREATEd or ALTERed on 5.6, you can’t use them with 5.5. The recommended method to downgrade from version 5.6 to 5.5 is the “logical downgrade” method with mysqldump.

You can use the query below to identify the tables and columns that might be affected by this problem. Some of them are system tables in the mysql database, which means the mysql database itself is one of the databases you need to dump and restore.

mysql> SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS WHERE DATA_TYPE IN ('TIME','DATETIME','TIMESTAMP') ORDER BY TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME;

In a replication environment, it’s recommended that you upgrade/downgrade your slave(s) first. I tried to downgrade one of the slaves in the replication hierarchy from Percona Server 5.6 to Percona Server 5.5 via a logical downgrade, as follows:

mysql> show slave status\G
*************************** 1. row ***************************
*** Make note of replication coordinates ***

[root@slave ~]# mysqldump --single-transaction --routines --all-databases > backup56.sql
[root@slave ~]# rpm -qa | grep -i "percona"
percona-release-0.1-3.noarch
percona-nagios-plugins-1.1.6-1.noarch
Percona-Server-shared-56-5.6.32-rel78.0.el6.x86_64
Percona-Server-server-56-5.6.32-rel78.0.el6.x86_64
Percona-Server-client-56-5.6.32-rel78.0.el6.x86_64
[root@slave ~]# /etc/init.d/mysql stop
Shutting down MySQL (Percona Server).. SUCCESS!
[root@slave ~]# rpm -qa | grep Percona-Server | xargs rpm -e --nodeps
[root@slave ~]# mv /var/lib/mysql/ /var/lib/mysql56_old/
[root@slave ~]# yum install Percona-Server-client-55.x86_64 Percona-Server-server-55.x86_64 Percona-Server-shared-55.x86_64
Installed:
  Percona-Server-client-55.x86_64 0:5.5.51-rel38.1.el6
  Percona-Server-server-55.x86_64 0:5.5.51-rel38.1.el6
  Percona-Server-shared-55.x86_64 0:5.5.51-rel38.1.el6
Complete!
[root@slave ~]# /etc/init.d/mysql start
Starting MySQL (Percona Server).. SUCCESS!
[root@slave ~]# mysql_upgrade --upgrade-system-tables --skip-write-binlog
Looking for 'mysql' as: mysql
The --upgrade-system-tables option was used, databases won't be touched.
Running 'mysql_fix_privilege_tables'...
OK

At this point, Percona Server 5.6 downgraded to Percona Server 5.5. Now, let’s try to restore the backup taken from the 5.6 instance to the 5.5 instance.

[root@slave ~]# mysql < backup56.sql
ERROR 1064 (42000) at line 320: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'STATS_PERSISTENT=0' at line 11

After analyzing the results, I found that “STATS_PERSISTENT” was appended to a few of the MySQL system tables during the backup. The restore failed because innodb_stats_persistent is not available in MySQL 5.5. I found a reported bug that verified the issue.

To remedy this problem, you need to backup the MySQL user grants separately from the application databases: pt-show-grants from Percona Toolkit comes to the rescue!

From the Percona Server 5.6 slave:

[root@slave_5_6 ~]# pt-show-grants --flush > grants.sql
[root@slave_5_6 ~]# mysqldump --single-transaction --routines --databases db1 db2 > databases.sql

And to restore on the Percona Server 5.5 slave:

[root@slave_5_5 ~]# mysql < grants.sql
[root@slave_5_5 ~]# mysql < databases.sql

At this point, the slave was downgraded from Percona Server 5.6 to 5.5 and restored. However, when I tried to resume replication after the downgrade, it failed with the error below (the master running MySQL 5.6 and the slave downgraded to 5.5):

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: 10.0.3.131
                  Master_User: slaveuser
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: master-bin.000007
          Read_Master_Log_Pos: 4
               Relay_Log_File: centos4-relay-bin.000004
                Relay_Log_Pos: 151
        Relay_Master_Log_File: master-bin.000007
             Slave_IO_Running: No
            Slave_SQL_Running: Yes
.
.
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 1236
                Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Slave can not handle replication events with the checksum that master is configured to log; the first event 'master-bin.000006' at 120, the last event read from './master-bin.000007' at 120, the last byte read from './master-bin.000007' at 120.'
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
1 row in set (0.00 sec)

Remember, replication from a newer major version to an older major version of MySQL (for example, a 5.6 master and a 5.5 replica) is not supported, and is generally not recommended. Replication failed here because, starting in MySQL 5.6.6, the binlog_checksum option defaults to CRC32. Since that option did not exist in MySQL 5.5, the replica can’t handle the checksums coming from the master. The other way around, the cyclic redundancy check (CRC) fails because the binlog is in a pre-5.6 format and has no checksum info. You need to set binlog_checksum to NONE on the MySQL master server.

mysql> SET GLOBAL binlog_checksum = NONE;

This will resume replication on the 5.5 slave from the 5.6 master. I recommend setting binlog_checksum=NONE in my.cnf under the [mysqld] section to make this change persistent across a restart of the 5.6 master server.
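As a minimal fragment, the persistent setting on the 5.6 master would look like this:

```ini
# my.cnf on the 5.6 master — survive restarts
[mysqld]
binlog_checksum = NONE
```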

To summarize, downgrading through the “logical dump” method requires you to exclude the mysql system database from the dump/restore, and to dump the user grants separately (with the help of pt-show-grants) to restore the database users and privileges. You also need to make binlog_checksum compatible (i.e., set it to NONE) when downgrading from version 5.6 to an older version (e.g., 5.5 or 5.1). Along with that, you need to make sure binlog_rows_query_log_events is disabled and binlog_row_image is set to FULL. Also, when replicating from version 5.6 to 5.5, GTID-based replication is not supported, and you need to set gtid_mode=OFF in 5.6. All those variables should be set properly on the 5.6 master. Check the manual for more details.

Conclusion:

A MySQL downgrade is the reversal of an upgrade, and it can be painful. It might be necessary if you upgraded without proper testing: for example, if after the upgrade you notice that your application is malfunctioning, MySQL is crashing, or performance is not up to par.

Categories: MySQL

Percona Server 5.6.33-79.0 is now available

MySQL Performance Blog - Tue, 2016-10-18 15:45

Percona announces the release of Percona Server 5.6.33-79.0 on October 18th, 2016. Download the latest version from the Percona web site or the Percona Software Repositories.

Based on MySQL 5.6.33, including all the bug fixes in it, Percona Server 5.6.33-79.0 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release are available in the 5.6.33-79.0 milestone on Launchpad.

New Features:
  • Percona Server has implemented support for per-column VARCHAR/BLOB compression for the XtraDB storage engine. This also features compression dictionary support, to improve compression ratio for relatively short individual rows, such as JSON data.
  • A new TokuDB tokudb_dir_per_db option has been introduced to address two TokuDB shortcomings, the renaming of data files on table/index rename, and the ability to group data files together within a directory that represents a single database. This feature is disabled by default.
Bugs Fixed:
  • After fixing bug #1540338, the system table engine validation check is no longer skipped for tables that don’t have an explicit ENGINE clause in a CREATE TABLE statement. If MySQL upgrade statements are replicated, and the slave does not have MyISAM set as its default storage engine, then the CREATE TABLE mysql.server statement would attempt to create an InnoDB table and fail, because the mysql_system_tables.sql script omitted an explicit engine setting for this table. Bug fixed #1600056.
  • Audit Log Plugin malformed record could be written after audit_log_flush was set to ON in ASYNC and PERFORMANCE modes. Bug fixed #1613650.
  • A race condition between HandlerSocket and server shutdown could lead to a server stall if shutdown was issued immediately after HandlerSocket startup. Bug fixed #1617198.
  • HandlerSocket could access freed memory on startup. Bug fixed #1617998.
  • Workloads with statements that take non-transactional locks (LOCK TABLES, global read lock, and similar) could have caused deadlocks when running under Thread Pool with high priority queue enabled and thread_pool_high_prio_mode set to transactions. Fixed by placing such statements into the high priority queue even with the above thread_pool_high_prio_mode setting. Bugs fixed #1619559 and #1374930.
  • Fixed memory leaks in Audit Log Plugin. Bug fixed #1620152 (upstream #71759).
  • Server could crash due to a glibc bug in handling short-lived detached threads. Bug fixed #1621012 (upstream #82886).
  • QUERY_RESPONSE_TIME_READ and QUERY_RESPONSE_TIME_WRITE were returning QUERY_RESPONSE_TIME table data if accessed through a name that is not full uppercase. Bug fixed #1552428.
  • Fixed memory leaks in HandlerSocket. Bug fixed #1617949.
  • KILL QUERY was not behaving consistently and it would hang in some cases. Bug fixed #1621046 (upstream #45679).
  • Cipher ECDHE-RSA-AES128-GCM-SHA256 was listed in the list of supported ciphers but it wasn’t supported. Bug fixed #1622034 (upstream #82935).
  • A successful doublewrite recovery was shown as a warning in the error log. Bug fixed #1622985.
  • Variable query_cache_type couldn’t be set to 0 if it was already set to that value. Bug fixed #1625501 (upstream #69396).
  • LRU manager thread could run too long on a server shutdown, causing a server crash. Bug fixed #1626069.
  • tokudb_default was not recognized by Percona Server as a valid row format. Bug fixed #1626206.
  • InnoDB ANALYZE TABLE didn’t remove its table from the background statistics processing queue. Bug fixed #1626441 (upstream #71761).
  • Upstream merge for #81657 to 5.6 was incorrect. Bug fixed #1626936 (upstream #83124).
  • Fixed multi-threaded slave thread leaks that happened in case of thread create failure. Bug fixed #1619622 (upstream #82980).
  • Shutdown waiting for a purge to complete was undiagnosed for the first minute. Bug fixed #1616785.
  • Unnecessary InnoDB change buffer merge attempts are now skipped upon reading disk pages of non-applying types. Bug fixed #1618393 (upstream #75235).

Other bugs fixed: #1614439, #1614949, #1616392 (upstream #82798), #1624993 (#736), #1613647, #1626002 (upstream #83073), #904714, #1610102, #1610110, #1613663, #1613728, #1613986, #1614885, #1615959, #1615970, #1616333, #1616404, #1616768, #1617150, #1617216, #1617267, #1618478, #1618811, #1618819, #1619547, #1619572, #1620583, #1622449, #1622456, #1622977, #1623011, #1624992 (#1014), #1625176, #1625187, #1626500, #1628417, #964, and #735.

NOTE: If you haven’t already, make sure to add the new Debian/Ubuntu repository signing key.

Release notes for Percona Server 5.6.33-79.0 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

Categories: MySQL

Percona Monitoring and Management 1.0.5 is now available

MySQL Performance Blog - Fri, 2016-10-14 14:15


Percona is glad to announce the release of Percona Monitoring and Management 1.0.5.

PMM Server is distributed through Docker Hub, and PMM Client through a tarball or system packages. The instructions for installing or upgrading PMM are available in the documentation.

PMM Server changelog

  • Prometheus 1.1.3
  • Consul 0.7.0
  • Added Orchestrator, a MySQL replication topology management and visualization tool, available at the /orchestrator URL. You can see a demo here.
    Note: Orchestrator was included in PMM for experimental purposes. It is a standalone tool, not integrated with PMM other than being accessible from the landing page. You can find instructions for using Orchestrator with PMM here.
  • Added ProxySQL metrics and dashboard. You can see a demo here.
  • Changed the metric storage encoding to reduce disk space usage by 50-70%.
  • Grafana data is now stored in the data container to preserve your custom dashboards and settings.
    Note: to enable this, create the data container with “-v /var/lib/grafana”.
  • MySQL Query Analytics data is now preserved when you remove and then add a mysql:queries instance with the same name using pmm-admin.
  • Fixed a rare issue where Nginx tried to use IPv6 for localhost connections.
  • Improvements and fixes to Query Analytics.
  • Various dashboard improvements.

PMM Client changelog

  • Added check for orphaned local and remote services.
  • Added repair command to remove orphaned services.
  • Added proxysql:metrics service and proxysql_exporter.
  • Amended check-network output.
  • Disabled initial client configuration with a name that is already in use.
  • Changed the threshold for automatically disabling table stats when adding the mysql:metrics service to 1,000 tables on the server. Table stats were previously disabled automatically only if there were over 10,000 tables. You can still manually disable table stats using “pmm-admin add mysql --disable-tablestats”.
  • Fixes for mysql:queries service:
    • Improved registration and detection of orphaned setup
    • PID file “” is no longer created on Amazon Linux (requires to re-add mysql:queries service)
    • Fixed support for MySQL using a timezone different than UTC
    • Corrected detection of slow log rotation and also perform its own rotation when used as a query source
    • RELOAD privilege is now required to flush the slow log
  • PMM Client can be installed as a package on RedHat EL 5/CentOS 5.

To see a live demo, please visit pmmdemo.percona.com.

We welcome your feedback and questions on our PMM forum.

About Percona Monitoring and Management
Percona Monitoring and Management is an open-source platform for managing and monitoring MySQL and MongoDB performance. It is developed by Percona in collaboration with experts in the field of managed database services, support and consulting.

PMM is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

Categories: MySQL

Five Surprising Secrets of People Who Always Win

Xaprb, home of innotop - Fri, 2016-10-14 01:46

Are you a winner? Or are you going to let life get you back on your heels and make you into a failure? Because remember, if you lose, it means you’re a loser. By definition.

Here’s the surprising truth about what winners do and how they do it, and how you can too.

Winners Always Win

You can always identify a winner right away, because they’re the ones who are winning. Why did they win? Because they’re winners. Why are they winners? Because they win.

Winners Have Winning Strategies

If you picked a strategy and it didn’t result in winning, it’s because you picked the wrong strategy, and you’re a failure.

Winners Don’t Let Failure Define Them

You miss all the shots you don’t take. You need to experience failure in a positive way. You only need to win once to be a winner, but if you stop short of winning, you’re a failure. Go ahead. Throw more good money after bad. If you don’t, you’ll never know if this time will be that one time that you finally win. Besides, the best way to learn how to do things is to learn how not to do them by failing at them, just like the best way to live is to not die. On that note, the best way to be a winner is to not be a loser.

Winners Don’t Believe In Privilege, Only In Positive Thinking

You were born to a wealthy family and you went to a great school because you are a winner. If you weren’t born to a wealthy family and you didn’t go to a great school, but you’re a winner, it’s because you are a rags-to-riches success story of hard work and grit. If your hard work and grit didn’t make you a winner, it’s because you’re a loser and you didn’t win hard enough. And don’t let survivor bias and hindsight bias diminish the sweetness (or bitterness) of your victory or loss. Winners know that the Universe itself is forced to reward positive thoughts with winning. It’s the laws of quantum physics, because that’s what winners have decided to believe.

Winners Always Read About Winning Strategies Of Other Winners

Winners quote Good To Great and Built To Last because those are stories written about winners. They read every article about the surprising strategies of winners. And they put ALL of the strategies from ALL the articles into action. They make sure they put in the extra hours but get enough sleep, while keeping a regular routine and shaking things up to avoid being in a rut. They never miss a chance to network, but they’re at home with the family every night. They have unshakeable faith and certitude, and always the right answers, because winners don’t need experience to teach them things.

Aren’t you glad you read all the way to the end of this and now you know how to be a winner? Of course you are! You have read this, therefore you’re a winner. If you’re not, you don’t belong here, move along now!

Note: this article is a parody of the appealing, but substanceless, “success literature” industry and their marketing. My advice: Don’t succumb to their siren song. Be deliberate and examine your plans, actions, and outcomes in search of continuous self-improvement.

Photo Credit

Categories: MySQL

Update the Signing Key for Percona Debian and Ubuntu Packages

MySQL Performance Blog - Thu, 2016-10-13 18:06

In this blog post, we’ll explain how to update the signing key for Percona Debian and Ubuntu packages.

Some users might have already noticed the following warning on Ubuntu 16.04 (Xenial Xerus) when running apt-get update:

W: http://repo.percona.com/apt/dists/xenial/InRelease: Signature by key 430BDF5C56E7C94E848EE60C1C4CBDCDCD2EFD2A uses weak digest algorithm (SHA1)

Percona .deb packages are signed with a key that uses an algorithm now considered weak. Starting with the next release, Debian and Ubuntu packages will be signed with a new key that uses the much stronger SHA-512 digest algorithm, and all future package releases will use it as well.

You’ll need to do one of the following in order to use the new key:

  • If you installed the Percona repository package as described here, this package is automatically updated to a new package version (percona-release_0.1-4). This package currently contains both the old and new keys. This helps make the transition easier (until all packages are signed with the new key).
  • Install the new Percona repository package as described in the installation guide.
  • Manually download and add the key from either keys.gnupg.net or keyserver.ubuntu.com by running:
    apt-key adv --keyserver keys.gnupg.net --recv-keys 8507EFA5 or
    apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 8507EFA5

It’s important that you add the new key before the next release. Otherwise you’ll see the following warning:

W: GPG error: http://repo.percona.com xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 9334A25F8507EFA5

Leave any questions about updating the signing key for Percona Debian and Ubuntu packages in the comments below.

Categories: MySQL

MySQL 5.7 Performance Tuning Immediately After Installation

MySQL Performance Blog - Wed, 2016-10-12 21:52

This blog updates Stephane Combaudon’s blog on MySQL performance tuning, and covers MySQL 5.7 performance tuning immediately after installation.

A few years ago, Stephane Combaudon wrote a blog post on Ten MySQL performance tuning settings after installation that covers the (now) older versions of MySQL: 5.1, 5.5 and 5.6. In this post, I will look into what to tune in MySQL 5.7 (with a focus on InnoDB).

The good news is that MySQL 5.7 has significantly better default values. Morgan Tocker created a page with a complete list of features in MySQL 5.7, which is a great reference point for checking which variables are now set to sensible values by default.

In MySQL 5.7, there are only four really important variables that need to be changed. However, there are other InnoDB and global MySQL variables that might need to be tuned for a specific workload and hardware.

To start, add the following settings to my.cnf under the [mysqld] section. You will need to restart MySQL:

[mysqld]
# other variables here
innodb_buffer_pool_size = 1G # (adjust value here, 50%-70% of total RAM)
innodb_log_file_size = 256M
innodb_flush_log_at_trx_commit = 1 # may change to 2 or 0
innodb_flush_method = O_DIRECT

Description:

  • innodb_buffer_pool_size: start with 50%-70% of total RAM; it does not need to be larger than the database size
  • innodb_flush_log_at_trx_commit: 1 (the default); 0 or 2 give more performance but less reliability
  • innodb_log_file_size: 128M-2G; does not need to be larger than the buffer pool
  • innodb_flush_method: O_DIRECT (avoids double buffering)
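As a quick sketch of how you might pick a starting value for the buffer pool, the 50%-70% guideline can be computed from the machine's RAM. This is a Linux-only illustration, and the 60% midpoint is an arbitrary choice for the sketch, not a recommendation from the article:

```shell
# Suggest a starting innodb_buffer_pool_size as ~60% of total RAM,
# the midpoint of the 50%-70% guideline (reads MemTotal from /proc/meminfo).
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
pool_mb=$(( total_kb * 60 / 100 / 1024 ))
echo "innodb_buffer_pool_size = ${pool_mb}M"
```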

What is next?

Those are a good starting point for any new installation. There are a number of other variables that can increase MySQL performance for some workloads. Usually, I would set up a MySQL monitoring/graphing tool (for example, the Percona Monitoring and Management platform) and then check the MySQL dashboard to perform further tuning.

What can we tune further based on the graphs?

InnoDB buffer pool size. Look at the graphs:

As we can see, we can probably benefit from increasing the InnoDB buffer pool size a bit to ~10G, as we have RAM available and the number of free pages is small compared to the total buffer pool.
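The "free pages are small compared to the total" check can be sketched with the Innodb_buffer_pool_pages_free and Innodb_buffer_pool_pages_total counters from SHOW GLOBAL STATUS. The two values below are hypothetical samples, not the numbers behind the graphs:

```shell
# Hypothetical samples of Innodb_buffer_pool_pages_free / _total
# (as read from SHOW GLOBAL STATUS). A low free percentage suggests the
# pool could be grown if RAM allows.
pages_free=12000
pages_total=524288   # an 8G pool at the default 16KB page size
pct_free=$(( pages_free * 100 / pages_total ))
echo "free pages: ${pct_free}% of the buffer pool"
```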

InnoDB log file size. Look at the graph:

As we can see here, InnoDB usually writes 2.26 GB of data per hour, which exceeds the total size of the log files (2G). We can now increase the innodb_log_file_size variable and restart MySQL. Alternatively, use “show engine InnoDB status” to calculate a good InnoDB log file size.
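The "show engine InnoDB status" method mentioned above boils down to sampling the "Log sequence number" (LSN) twice and sizing the combined log files for roughly an hour of redo. The two LSN values below are made-up samples for illustration; in practice you would read them from SHOW ENGINE INNODB STATUS 60 seconds apart:

```shell
# Sketch of the LSN-delta sizing method: the LSN is a byte counter of all
# redo ever written, so the delta over 60 seconds gives the write rate.
lsn_start=3836410803   # hypothetical first sample
lsn_end=3837218674     # hypothetical sample taken 60 seconds later
bytes_per_hour=$(( (lsn_end - lsn_start) * 60 ))
# With the default two log files, each one holds half the hourly volume.
per_file_mb=$(( bytes_per_hour / 2 / 1024 / 1024 ))
echo "suggested innodb_log_file_size = ${per_file_mb}M"
```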

Other variables

There are a number of other InnoDB variables that can be further tuned:

innodb_autoinc_lock_mode

Setting innodb_autoinc_lock_mode = 2 (interleaved mode) removes the need for the table-level AUTO-INC lock (and can increase performance when multi-row INSERT statements are used on tables with an auto_increment primary key). This requires binlog_format=ROW or MIXED (ROW is the default in MySQL 5.7).

innodb_io_capacity and innodb_io_capacity_max

This is more advanced tuning, and it only makes sense when you are performing a lot of writes all the time (it does not apply to reads, i.e. SELECTs). If you really need to tune it, the best method is to know how many IOPS the system can do. For example, if the server has one SSD drive, we can set innodb_io_capacity_max=6000 and innodb_io_capacity=3000 (50% of the max). It is a good idea to run sysbench or another benchmark tool to measure the disk throughput.
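The 50%-of-max rule of thumb above is simple enough to script. The max_iops value here is a hypothetical figure you would measure with sysbench fileio or a similar benchmark, not a measured result:

```shell
# Derive innodb_io_capacity from a benchmarked peak IOPS figure using the
# 50%-of-max rule of thumb. max_iops is a hypothetical benchmark result.
max_iops=6000
io_capacity=$(( max_iops / 2 ))
echo "innodb_io_capacity_max = ${max_iops}"
echo "innodb_io_capacity = ${io_capacity}"
```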

But do we need to worry about this setting? Look at the graph of buffer pool’s “dirty pages“:

In this case, the total amount of dirty pages is high, and it looks like InnoDB can’t keep up with flushing them. If we have a fast disk subsystem (i.e., SSD), we might benefit from increasing innodb_io_capacity and innodb_io_capacity_max.

Conclusion or TL;DR version

The new MySQL 5.7 defaults are much better for general-purpose workloads. At the same time, we still need to configure InnoDB variables to take advantage of the amount of RAM on the box. After installation, follow these steps:

  1. Add InnoDB variables to my.cnf (as described above) and restart MySQL
  2. Install a monitoring system, (e.g., Percona Monitoring and Management platform)
  3. Look at the graphs and determine if MySQL needs to be tuned further
Categories: MySQL

Encrypt your –defaults-file

MySQL Performance Blog - Wed, 2016-10-12 17:22
Encrypt your credentials using GPG

This blog post will look how to use encryption to secure your database credentials.

In the recent blog post Use MySQL Shell Securely from Bash, there are some good examples of how you might avoid using a ~/.my.cnf – but you still need to put the password on disk in the script. MySQL 5.6.6 and later introduced the --login-path option, which is a handy way to store per-connection entries and keep the credentials in an encrypted format. This is a great improvement, but as shown in Get MySQL Passwords in Plain Text from .mylogin.cnf, it is pretty easy to get that information back out.

Let’s fix this with gpg-agent, mkfifo and a few servings of Bash foo…

If you want to keep prying eyes away from your super secret database credentials, then you really need to encrypt it. Nowadays most people are familiar with GPG (GNU Privacy Guard), but for those of you that aren’t it is a free implementation of the OpenPGP standard that allows you to encrypt and sign your data and communication.

First steps…

Before we can go on to use GPG to encrypt our credentials, we need to get it working. GnuPG comes with almost every *nix operating system, but for this post we’ll be using Ubuntu 16.04 LTS and we’ll presume that it isn’t yet installed.

$ sudo apt-get install gnupg gnupg-agent pinentry-curses

Once the packages are installed, there is a little configuration required to make things simpler. We’ll go with some minimal settings just to get you going. First of all, we’ll create our main key:

$ gpg --gen-key
gpg (GnuPG) 1.4.12; Copyright (C) 2012 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (4096)
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (5y)
Key expires at Tue 05 Oct 2021 23:59:00 BST
Is this correct? (y/N) y

You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
    "Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"

Real name: Ceri Williams
Email address: notmyrealaddress@somedomain.com
Comment: Encrypted credentials for MySQL
You selected this USER-ID:
    "Ceri Williams (Encrypted credentials for MySQL) <notmyrealaddress@somedomain.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

After typing a password and gaining sufficient entropy you will have your first key! You can show your private keys as follows:

$ gpg --list-secret-keys
/home/ceri/.gnupg/secring.gpg
-----------------------------
sec   4096R/C38C02B0 2016-10-06 [expires: 2021-10-05]
uid                  Ceri Williams (Encrypted credentials for MySQL) <notmyrealaddress@somedomain.com>
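If you want to grab the short key ID (C38C02B0) programmatically for use in later commands, a small awk sketch will do. The listing is inlined here as sample text so the sketch runs without a keyring:

```shell
# Extract the short key ID from `gpg --list-secret-keys`-style output.
# The listing line is inlined sample data, matching the output shown above.
listing='sec   4096R/C38C02B0 2016-10-06 [expires: 2021-10-05]'
keyid=$(echo "$listing" | awk '/^sec/ {split($2, a, "/"); print a[2]}')
echo "$keyid"
```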

We’ll now create our “gpg.conf” in which to keep a few settings. This sets the key that is used by default when encrypting, enables the gpg-agent and removes the copyright message.

$ cat <<EOF > ~/.gnupg/gpg.conf
default-key C38C02B0
use-agent
no-greeting
EOF

Now we’ll add a few settings for “gpg-agent” and allow the key to be saved for one day to reduce the number of times you need to enter a password. Also, as this post concentrates on command line programs, we’ve enabled the ncurses pinentry to specify the password when requested.

$ cat <<EOF > ~/.gnupg/gpg-agent.conf
pinentry-program /usr/bin/pinentry-curses
default-cache-ttl 86400
max-cache-ttl 86400
EOF

You can find more information about setting up and using GPG in the GNU Privacy Handbook.

Encrypt your credentials

If all has gone well so far, you should be able to encrypt your first message. Here is a simple example to create armored (ASCII) output for a recipient with key “C38C02B0”:

$ echo hello | gpg -e --armor -r C38C02B0
-----BEGIN PGP MESSAGE-----
Version: GnuPG v1

hQIMA/T3pqGixN5nAQ/+IxmmgoHNVY2IXp7OAQUZZtCw0ayZu/rFotsJBiQcNG4W
J9JZmG78fgPfyF2FD4oVsXDBW7yDzfDSxCcX7LL9z4p33bzUAYOwofRP9+8qJGq/
qob1SclNN4fdFc/PtI7XKYBFYcHlfFeTIH44w9GEGdZlyfDfej+qGTJX+UHrKTo3
DaE2qpb7GvohEnDPX5WM0Pts3cATi3PcH4C9OZ5dgYizmlPB58R2DZl1ioERy2jE
WSIhkZ8ZPW9ezWYDCtFbgFSpgynzYeFRVv1rel8cxZCSYgHOHrUgQM6WdtVFmEjL
ONaRiEA9IcXZXDXaeFezKr2F8PJyaVfmheZDdRTdw54e4R6kPunDeWtD2aCJE4EF
ztyWLgQZ0wNE8UY0PepSu5p0FAENk08xd9xNMCSiCuwmBAorafaO9Q8EnJjHS/w5
aKLJzNzad+8zKq3zgBxHGj1liHmx873Epz5izsH/lK9Jwy6H5qGVB71XuNuRMzNr
ghgHFWNX7Wy8wnBnV6MrenASgtCUY6cGdT7YpPe6pLr8Qj/3QRLdzHDlMi9gGxoS
26emhTi8sIUzQRtQxFKKXyZ43sldtRewHE/k4/ZRXz5N6ST2cSFAcsMyjScS4p2a
JvPvHt4xhn8uRhgiauqd7IqCCSWFrAR4J50AdARmVeucWsbRzIJIEnKW4G/XikvS
QQFOvcdalGWKMpH+mRBkHRjbOgGpB0GeRbuKzhdDvVT+EhhIOG8DphumgI0yDyTo
Ote5sANgTRpr0KunJPgz5pER
=HsSu
-----END PGP MESSAGE-----

Now that we have GPG working, we can secure our credentials and encrypt them to use later on. One of the default files MySQL reads is “~/.my.cnf”, which is where you can store your user credentials for easy command line access.

$ cat <<EOF | gpg --encrypt --armor -r C38C02B0 -o ~/.my.cnf.asc
[client]
user = ceri
password = mysecretpassword
[mysql]
skip-auto-rehash
prompt = "smysql \d> "
EOF

There you go, everything is nice and secure! But wait, how can anything use this?

Bash foo brings MySQL data to you

Most MySQL and Percona tools will accept the “–defaults-file” argument, which tells the program where to look to find what configuration to run. This will allow us to use our encrypted config.

The following script carries out the following actions:

  1. Creates a temporary file on disk and then removes it
  2. Creates a FIFO (a socket-like communication channel that requires both ends to be connected)
  3. Decrypts the config to the FIFO in the background
  4. Launches the “mysql” client and reads from the FIFO

#!/bin/bash
set -e

declare -ra ARGS=( "${@}" )
declare -ri ARGV=${#ARGS[@]}
declare -r SEC_MYCNF=$(test -f ${1:-undef} && echo $_ || echo '.my.cnf.asc')
declare -r SEC_FIFO=$(mktemp)
declare -a PASSTHRU=( "${ARGS[@]}" )
test ${ARGV} -gt 0 && test -f "${ARGS[0]}" && PASSTHRU=( "${ARGS[@]:1}" )
set -u

function cleanup {
    test -e ${SEC_FIFO} && rm -f $_
    return $?
}

function decrypt {
    set +e
    $(which gpg) --batch --yes -o ${SEC_FIFO} -d ${SEC_MYCNF} >debug.log 2>&1
    test $? -eq 0 || $(which gpg) --yes -o ${SEC_FIFO} -d ${SEC_MYCNF} >debug.log 2>&1
    set -e
}

function exec_cmd {
    local -r cmd=${1}
    set +u
    ${cmd} --defaults-file=${SEC_FIFO} "${PASSTHRU[@]}"
    set -u
}

trap cleanup EXIT
test -e ${SEC_MYCNF} || exit 1
cleanup && mkfifo ${SEC_FIFO} && decrypt &
exec_cmd /usr/bin/mysql

You can use this script as you would normally with the “mysql” client, and pass your desired arguments. You can also optionally pass a specific encrypted config as the first argument:

$ ./smysql.sh .my.test.asc
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 56
Server version: 5.7.14-8 Percona Server (GPL), Release '8', Revision '1f84ccd'

Copyright (c) 2009-2016 Percona LLC and/or its affiliates
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

smysql (none)>

There we go, MySQL access via an encrypted “–defaults-file” – and as long as your key is unlocked in the agent you do not need to enter the password.
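The FIFO trick at the heart of the script can be demonstrated in isolation, without gpg or MySQL at all: a background process writes the (pretend) decrypted credentials into a named pipe, a consumer reads them exactly once, and nothing persists on disk afterwards.

```shell
# Standalone sketch of the FIFO pattern used by the script above.
fifo=$(mktemp -u)          # reserve a pathname without creating a file
mkfifo "$fifo"
printf '[client]\npassword = mysecret\n' > "$fifo" &   # background writer
config=$(cat "$fifo")      # reader blocks until the writer connects
rm -f "$fifo"              # remove the pipe; the secret never hit disk
echo "$config"
```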

But wait . . . what about all of the other tools that you might want to use? Well, with a slight tweak you can make the script a little fancier and get other tools to use the config, too (tools such as mysqladmin, mysqldump, pt-show-grants, pt-table-checksum, etc.). The key part of the next script is the specification of accepted commands (“ALIASES”) and the use of symbolic links to alias the script:

#!/bin/bash
set -e

declare -ra ARGS=( "${@}" )
declare -ri ARGV=${#ARGS[@]}
declare -rA ALIASES=(
    [smysql]=mysql
    [smysqldump]=mysqldump
    [smysqladmin]=mysqladmin
    [spt-show-grants]=pt-show-grants
    [spt-table-checksum]=pt-table-checksum
    [spt-table-sync]=pt-table-sync
    [spt-query-digest]=pt-query-digest
)
declare -r PROGNAME=$(basename ${0})
declare -r SEC_MYCNF=$(test -f ${1:-undef} && echo $_ || echo '.my.gpg')
declare -r SEC_FIFO=$(mktemp)
declare -a PASSTHRU=( "${ARGS[@]}" )
test ${ARGV} -gt 0 && test -f "${ARGS[0]}" && PASSTHRU=( "${ARGS[@]:1}" )
set -u

function cleanup {
    test -e ${SEC_FIFO} && rm -f $_
    return $?
}

function decrypt {
    set +e
    $(which gpg) --batch --yes -o ${SEC_FIFO} -d ${SEC_MYCNF} >debug.log 2>&1
    test $? -eq 0 || $(which gpg) --yes -o ${SEC_FIFO} -d ${SEC_MYCNF} >debug.log 2>&1
    set -e
}

function check_cmd {
    local k
    local cmd=${1}
    for k in "${!ALIASES[@]}"; do
        test "${cmd}" = ${k} && test -x "$(which ${ALIASES[${k}]})" && echo $_ && return 0
    done
    return 1
}

function exec_cmd {
    local -r cmd=${1}
    set +u
    ${cmd} --defaults-file=${SEC_FIFO} "${PASSTHRU[@]}"
    set -u
}

function usage {
    local realfn=$(realpath ${0})
    cat <<EOS | fold -sw 120
USAGE: $(basename ${0}) enc_file.gpg [--arg=val]
    use a GPG-encrypted my.cnf (default: ${SEC_MYCNF})
    currently supports: ${ALIASES[@]}
    create a symlink to match the alias (real app prefixed with 's')
    e.g.
    sudo ln -s ${realfn} /usr/local/bin/smysql
    sudo ln -s ${realfn} /usr/local/bin/spt-show-grants
EOS
}

trap cleanup EXIT ERR
test -e ${SEC_MYCNF} || { usage; exit 1; }
cmd=$(check_cmd ${PROGNAME})
test $? -eq 0 || { echo ${ALIASES[${PROGNAME}]} is not available; exit 3; }
cleanup && mkfifo ${SEC_FIFO} && decrypt &
exec_cmd ${cmd}

Now we can set up some symlinks so that the script can be called in a way that the correct application is chosen:

$ mkdir -p ~/bin
$ mv smysql.sh ~/bin
$ ln -s ~/bin/smysql.sh ~/bin/smysql
$ ln -s ~/bin/smysql.sh ~/bin/smysqladmin
$ ln -s ~/bin/smysql.sh ~/bin/spt-show-grants
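The symlink-dispatch idea itself can be sketched in miniature: a script inspects the name it was invoked under (basename of $0) and maps it to the real command. The aliases below are harmless throwaway stand-ins, not the article's tool list:

```shell
# Miniature of symlink dispatch: invoking the script via a symlink named
# "secho" makes it exec the aliased real command ("echo").
dir=$(mktemp -d)
cat > "$dir/dispatch.sh" <<'EOF'
#!/bin/bash
declare -rA ALIASES=( [secho]=echo [sdate]=date )
exec "${ALIASES[$(basename "$0")]}" "$@"
EOF
chmod +x "$dir/dispatch.sh"
ln -s "$dir/dispatch.sh" "$dir/secho"
out=$("$dir/secho" hello)
echo "$out"
rm -rf "$dir"
```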

Examples

With some symlinks now in place we can try out some of the tools that we have enabled:

$ ~/bin/smysql -Bsse 'select 1'
1
$ ~/bin/smysqladmin proc
+----+------+-----------+----+---------+------+----------+------------------+-----------+---------------+
| Id | User | Host      | db | Command | Time | State    | Info             | Rows_sent | Rows_examined |
+----+------+-----------+----+---------+------+----------+------------------+-----------+---------------+
| 58 | ceri | localhost |    | Query   | 0    | starting | show processlist | 0         | 0             |
+----+------+-----------+----+---------+------+----------+------------------+-----------+---------------+
$ ~/bin/spt-show-grants --only root@localhost | head -n3
-- Grants dumped by pt-show-grants
-- Dumped from server Localhost via UNIX socket, MySQL 5.7.14-8 at 2016-10-07 01:01:55
-- Grants for 'root'@'localhost'

Enjoy some added security in your database environment, on your laptop and even on your Raspberry Pi!

Categories: MySQL

Using the loose_ option prefix in my.cnf

MySQL Performance Blog - Tue, 2016-10-11 21:41

In this blog post, I’ll look at how to use the loose_ option prefix in my.cnf in MySQL.

mysqld throws errors at startup – and refuses to start – if non-existent options are defined in the my.cnf file.

For example:

2016-10-05 15:56:07 23864 [ERROR] /usr/sbin/mysqld: unknown variable 'bogus_option=1'

The MySQL manual has a solution: use the loose_ prefix option in my.cnf file.

In the prior example, if we specify loose_bogus_option=1 in the my.cnf file, instead of bogus_option=1, mysqld starts successfully and ignores that option.

This is useful in three situations:

  1. Using Percona Server-specific options in a config file that might be used for MySQL Community instances.
  2. Including options from a future version of MySQL.
  3. Setting options for plugins before they’re loaded.
Use Case 1:

We can use this feature to make a my.cnf file that will work on both MySQL Community and Percona Server, but contains options that aren’t implemented in MySQL Community. For example:

slow_query_log = ON
long_query_time = 0
loose_log_slow_rate_limit = 100

Without the loose_ prefix on log_slow_rate_limit, MySQL Community will throw a fatal error (as log_slow_rate_limit is not implemented in MySQL Community).

Use Case 2:

If you push out the same my.cnf file to multiple versions of MySQL, you can still use features that only exist on newer versions:

loose_super_read_only = ON

Use Case 3:

Another use case is installing the Percona audit log plugin. If you specify the audit_log options in my.cnf and then restart the server before running INSTALL PLUGIN, mysqld will fail. If you use the loose_ prefix, mysqld will start up successfully, and the options will be read when you run INSTALL PLUGIN.

loose_audit_log_file = /audit/audit.log
loose_audit_log_rotate_on_size = 1073741824
loose_audit_log_rotations = 5
loose_audit_log_format = JSON

This trick also works for options given on the command line, e.g. mysqld --loose-bogus-option.

Categories: MySQL

MySQL 8.0: The end of MyISAM

MySQL Performance Blog - Tue, 2016-10-11 17:54

This blog discusses the gradual end of MyISAM in MySQL.

The story that started 20 years ago is coming to its end. I'm talking about the old MyISAM storage engine: the only storage engine MySQL provided in 1995, and one that remained available in MySQL for more than 20 years. Actually, part of my job as a MySQL consultant for 10+ years was to discover MyISAM tables and advise customers on how to convert those to InnoDB.

(Check your MySQL installation, you may still have MyISAM tables).

MySQL 5.7 still used MyISAM storage for the system tables in the MySQL schema.

In MySQL 8.0 (a DMR version as of this writing), the MyISAM storage engine is still available, but only in a very limited scope:

  • After introducing the new data dictionary, the MyISAM tables are gone from the system schema (“mysql” db).
  • Working with MyISAM is harder now (and discouraged): you can’t just copy MyISAM tables into a running MySQL server, they will not be discovered (unlike InnoDB, where you can use “ALTER TABLE … IMPORT TABLESPACE”)
  • However, you can create a table engine=MyISAM, and it will work as before

InnoDB has implemented all the features that it previously lacked compared to MyISAM:

  • Full Text Indexes: MyISAM yes; InnoDB since MySQL 5.6
  • Portable tables (tablespaces): MyISAM yes; InnoDB since MySQL 5.6
  • Spatial Indexes/RTREE (GIS): MyISAM yes; InnoDB since MySQL 5.7
  • Last update time for a table: MyISAM yes; InnoDB since MySQL 5.7 (http://dev.mysql.com/worklog/task/?id=6658)
  • Suitable for temp tables: MyISAM yes; InnoDB since MySQL 5.7 (complex SELECTs now also use InnoDB on-disk temp tables)
  • Faster count(*): MyISAM yes; InnoDB faster in MySQL 5.7, but it does not store the counter

So the only MyISAM advantages left are:

  1. Tables will be smaller on disk compared to uncompressed InnoDB tables.
  2. The count(*) is still much faster in MyISAM:

mysql> select count(*) from a_myisam;
+----------+
| count(*) |
+----------+
|  6291456 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from a_innodb;
+----------+
| count(*) |
+----------+
|  6291456 |
+----------+
1 row in set (2.16 sec)

I would not use MyISAM unless there is a specific case for it, and for well-known reasons (MyISAM is non-transactional, uses table-level locks, has no crash recovery, etc.).

My colleague Laurynas Biveinis also suggested converting MyISAM to an optional storage engine plugin.

Categories: MySQL

Take Percona’s One-Click 2017 Top Database Concerns Poll

MySQL Performance Blog - Mon, 2016-10-10 23:23

Take Percona’s One-Click 2017 Top Database Concerns Poll.

With 2017 coming quickly around the corner, it's time to start thinking about what next year will bring to the open source database community. We just finished Percona Live Europe 2016, where we looked at new technologies, new techniques and new ideas. With all this change comes some uncertainty: how is a changing ecosystem going to affect your database environment? Are you up to speed on the latest technologies, and how they can impact your world?

Are your biggest concerns for 2017 scalability, performance, security, monitoring, updates and bugs, or staffing issues? What do you think is going to be your biggest issue in the coming year?

Please take a few seconds and answer the following poll. It will help the community get an idea of how the new year could impact their critical database environments.

If you’ve faced specific issues, feel free to comment below. We’ll post a follow-up blog with the results!

Note: There is a poll embedded within this post, please visit the site to participate in this post's poll.

You can see the results of our last blog poll on security here.

Categories: MySQL