MySQL

How to Save and Load Docker Images to Offline Servers

MySQL Performance Blog - 19 hours 49 min ago

In this post, we’ll see how to make Docker images available to servers that don’t have access to the Internet (i.e., machines where docker pull <image_name> does not work).

As a specific example, we will do this with the latest Percona Monitoring and Management Docker images, since we had requests for this from users and customers. With the following steps, you’ll be able to deploy PMM within your secure network, without access to the Internet. Additionally, the same steps can be used when you need to upgrade the containers’ version in future releases.

There are two ways in which we can do this:

  • the easy way, by using docker save and docker load, or
  • the not-so-easy way, by setting up our own registry

We’ll focus on the first option, since the latter is a bit more convoluted. If you need your own registry, you are probably looking into something else rather than simply avoiding a firewall to pull one image to a server. Check out the Docker online docs in case option two fits your needs better.

As of this writing, 1.1.3 is the latest PMM version, so this is what we’ll use in the example. An image name is comprised of three parts, namely:

  • user_account/ (note the ‘/’ at the end); or empty string (and no ‘/’) for the official Docker repo
  • image_name
  • :tag (note the ‘:’ at the beginning)

The PMM Docker images have the following syntax: percona/pmm-server:1.1.3, but you can change this in the following examples to whatever image name you want, and it will work just the same. Before moving on to the commands needed, let’s imagine that serverA is the machine that has access to the Internet and serverB is the machine behind the firewall.

The steps are simple enough. On serverA, get the image, and save it to a file:

serverA> docker pull percona/pmm-server:1.1.3
1.1.3: Pulling from percona/pmm-server
45a2e645736c: Pull complete
7a3c6f252004: Pull complete
2cc1d8878ff1: Pull complete
6c49ea4e9955: Pull complete
bc4630d3a194: Pull complete
75f0952c00bd: Pull complete
79d583a1689c: Pull complete
5a820193ac79: Pull complete
927a0614b164: Pull complete
Digest: sha256:5310b23066d00be418a7522c957b2da4155a63c3e7b08663327aef075674bc2e
Status: Downloaded newer image for percona/pmm-server:1.1.3
serverA> docker save percona/pmm-server:1.1.3 > ~/pmm-server_1.1.3.tar
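The tar file then needs to reach serverB somehow. A minimal sketch using scp (the user name and destination path here are placeholders) would be:

serverA> scp ~/pmm-server_1.1.3.tar user@serverB:~/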

Once the tar file is on serverB (moved via scp as sketched above, or by any other means), execute the following:

serverB> docker load < ~/pmm-server_1.1.3.tar
serverB> docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
percona/pmm-server   1.1.3               acc9af2459a4        3 weeks ago         1.146 GB

Now you’ll be able to use the image as if you had used docker pull percona/pmm-server:1.1.3:

serverB> docker create ... percona/pmm-server:1.1.3 /bin/true
301a9e89ee95886f497482038aa6601d6cb2e21c0532e1077fa44213ef597f38
serverB> docker run -d ... percona/pmm-server:1.1.3
dbaffa80f62bc0b80239b922bbc746d828fbbeb212a638cfafea92b827141abb
serverB> curl http://localhost | grep "Percona Monitoring and Management"
...
<p>Percona Monitoring and Management (PMM) is a free and open-source solution for managing and monitoring performance on MySQL and MongoDB, and provides time-based analysis of performance to ensure that your data works as efficiently as possible.</p>
...

Lastly, let me add the relevant documentation links, so you have them at hand, if needed:

https://www.percona.com/doc/percona-monitoring-and-management/deploy/server/docker.html

https://docs.docker.com/engine/reference/commandline/save/

https://docs.docker.com/engine/reference/commandline/load/

Categories: MySQL

ICP Counters in information_schema.INNODB_METRICS

MySQL Performance Blog - Mon, 2017-05-22 20:53

In this blog, we’ll look at ICP counters in the information_schema.INNODB_METRICS. This is part two of the Index Condition Pushdown (ICP) counters blog post series. 

As mentioned in the previous post, in this blog we will look at how to check on ICP counters on MySQL and Percona Server for MySQL. This also applies to MariaDB, since the INNODB_METRICS table is also available for MariaDB (as opposed to the Handler_icp_% counters being MariaDB-specific). We will use the same table and data set as in the previous post.

For simplicity we’ll show the examples on MySQL 5.7.18, but they also apply to the latest Percona Server for MySQL (5.7.18) and MariaDB Server (10.2.5):

mysql [localhost] {msandbox} (test) > SELECT @@version, @@version_comment;
+-----------+------------------------------+
| @@version | @@version_comment            |
+-----------+------------------------------+
| 5.7.18    | MySQL Community Server (GPL) |
+-----------+------------------------------+
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (test) > SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `f1` int(11) DEFAULT NULL,
  `f2` int(11) DEFAULT NULL,
  `f3` int(11) DEFAULT NULL,
  KEY `idx_f1_f2` (`f1`,`f2`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (test) > SELECT COUNT(*) FROM t1;
+----------+
| COUNT(*) |
+----------+
|  3999996 |
+----------+
1 row in set (3.98 sec)

mysql [localhost] {msandbox} (test) > SELECT * FROM t1 LIMIT 12;
+------+------+------+
| f1   | f2   | f3   |
+------+------+------+
|    1 |    1 |    1 |
|    1 |    2 |    1 |
|    1 |    3 |    1 |
|    1 |    4 |    1 |
|    2 |    1 |    1 |
|    2 |    2 |    1 |
|    2 |    3 |    1 |
|    2 |    4 |    1 |
|    3 |    1 |    1 |
|    3 |    2 |    1 |
|    3 |    3 |    1 |
|    3 |    4 |    1 |
+------+------+------+
12 rows in set (0.00 sec)

Before proceeding with the examples, let’s see what counters we have available and how to enable and query them. The documentation page is at the following link: https://dev.mysql.com/doc/refman/5.7/en/innodb-information-schema-metrics-table.html.

The first thing to notice is that we are advised to check the validity of the counters for each version where we want to use them. The counters represented in the INNODB_METRICS table are subject to change, so for the most up-to-date list it’s best to query the running MySQL server:

mysql [localhost] {msandbox} (test) > SELECT NAME, SUBSYSTEM, STATUS FROM information_schema.INNODB_METRICS WHERE NAME LIKE '%icp%';
+------------------+-----------+----------+
| NAME             | SUBSYSTEM | STATUS   |
+------------------+-----------+----------+
| icp_attempts     | icp       | disabled |
| icp_no_match     | icp       | disabled |
| icp_out_of_range | icp       | disabled |
| icp_match        | icp       | disabled |
+------------------+-----------+----------+
4 rows in set (0.00 sec)

Looking good! We have all the counters we expected, which are:

  • icp_attempts: the number of rows where ICP was evaluated
  • icp_no_match: the number of rows that did not completely match the pushed WHERE conditions
  • icp_out_of_range: the number of rows that were checked that were not in a valid scanning range
  • icp_match: the number of rows that completely matched the pushed WHERE conditions

This link to the code can be used for reference: https://github.com/mysql/mysql-server/blob/5.7/include/my_icp.h.
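For context, the relevant part of that header is essentially a small enum of per-row ICP outcomes. A simplified sketch follows (the real file may declare additional values, such as an error code, so treat this as illustrative only):

/* Simplified sketch of include/my_icp.h: possible results of evaluating a pushed index condition */
typedef enum icp_result {
  ICP_NO_MATCH,     /* the pushed condition did not match the row */
  ICP_MATCH,        /* the pushed condition matched */
  ICP_OUT_OF_RANGE  /* the index tuple is outside the scanned range */
} ICP_RESULT;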

After checking which counters we have at our disposal, you need to enable them (they are not enabled by default). For this, we can use the “modules” provided by MySQL to group similar counters for ease of use. This is also explained in detail in the documentation link above, under the “Counter Modules” section. INNODB_METRICS counters are quite inexpensive to maintain, as you can see in this post by Peter Z.

mysql [localhost] {msandbox} (test) > SET GLOBAL innodb_monitor_enable = module_icp;
Query OK, 0 rows affected (0.00 sec)
mysql [localhost] {msandbox} (test) > SELECT NAME, SUBSYSTEM, STATUS FROM information_schema.INNODB_METRICS WHERE NAME LIKE '%icp%';
+------------------+-----------+---------+
| NAME             | SUBSYSTEM | STATUS  |
+------------------+-----------+---------+
| icp_attempts     | icp       | enabled |
| icp_no_match     | icp       | enabled |
| icp_out_of_range | icp       | enabled |
| icp_match        | icp       | enabled |
+------------------+-----------+---------+
4 rows in set (0.00 sec)
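Note that SET GLOBAL does not survive a server restart. If you want the module enabled permanently, the same setting can go into the configuration file; a minimal sketch (the file location and existing options depend on your installation):

[mysqld]
# enable the ICP counter module at startup
innodb_monitor_enable = module_icp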

Perfect, we now know what counters we need, and how to enable them. We just need to know how to query them, and we can move on to the examples. However, before rushing into saying that a simple SELECT against the INNODB_METRICS table will do, let’s step back a bit and see what columns we have available that can be of use:

mysql [localhost] {msandbox} (test) > DESCRIBE information_schema.INNODB_METRICS;
+-----------------+--------------+------+-----+---------+-------+
| Field           | Type         | Null | Key | Default | Extra |
+-----------------+--------------+------+-----+---------+-------+
| NAME            | varchar(193) | NO   |     |         |       |
| SUBSYSTEM       | varchar(193) | NO   |     |         |       |
| COUNT           | bigint(21)   | NO   |     | 0       |       |
| MAX_COUNT       | bigint(21)   | YES  |     | NULL    |       |
| MIN_COUNT       | bigint(21)   | YES  |     | NULL    |       |
| AVG_COUNT       | double       | YES  |     | NULL    |       |
| COUNT_RESET     | bigint(21)   | NO   |     | 0       |       |
| MAX_COUNT_RESET | bigint(21)   | YES  |     | NULL    |       |
| MIN_COUNT_RESET | bigint(21)   | YES  |     | NULL    |       |
| AVG_COUNT_RESET | double       | YES  |     | NULL    |       |
| TIME_ENABLED    | datetime     | YES  |     | NULL    |       |
| TIME_DISABLED   | datetime     | YES  |     | NULL    |       |
| TIME_ELAPSED    | bigint(21)   | YES  |     | NULL    |       |
| TIME_RESET      | datetime     | YES  |     | NULL    |       |
| STATUS          | varchar(193) | NO   |     |         |       |
| TYPE            | varchar(193) | NO   |     |         |       |
| COMMENT         | varchar(193) | NO   |     |         |       |
+-----------------+--------------+------+-----+---------+-------+
17 rows in set (0.00 sec)

There are two types: %COUNT and %COUNT_RESET. The former counts since the corresponding counters were enabled, and the latter since they were last reset (we have the TIME_% columns to check when any of these were done). This is why in our examples we are going to check the %COUNT_RESET counters, so we can reset them before running each query (as we did with FLUSH STATUS in the previous post).
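For instance, a quick way to see both counter types side by side, along with when they were enabled and last reset, is a query like the following (a sketch; the values returned will depend on your server):

mysql [localhost] {msandbox} (test) > SELECT NAME, COUNT, COUNT_RESET, TIME_ENABLED, TIME_RESET FROM information_schema.INNODB_METRICS WHERE NAME LIKE 'icp%';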

Without further ado, let’s check how this all works together:

mysql [localhost] {msandbox} (test) > SET GLOBAL innodb_monitor_reset = module_icp;
Query OK, 0 rows affected (0.00 sec)
mysql [localhost] {msandbox} (test) > SELECT * FROM t1 WHERE f1 < 3 AND (f2 % 4) = 1;
+------+------+------+
| f1   | f2   | f3   |
+------+------+------+
|    1 |    1 |    1 |
|    2 |    1 |    1 |
+------+------+------+
2 rows in set (0.00 sec)
mysql [localhost] {msandbox} (test) > SELECT NAME, COUNT_RESET FROM information_schema.INNODB_METRICS WHERE NAME LIKE 'icp%';
+------------------+-------------+
| NAME             | COUNT_RESET |
+------------------+-------------+
| icp_attempts     |           9 |
| icp_no_match     |           6 |
| icp_out_of_range |           1 |
| icp_match        |           2 |
+------------------+-------------+
4 rows in set (0.00 sec)
mysql [localhost] {msandbox} (test) > EXPLAIN SELECT * FROM t1 WHERE f1 < 3 AND (f2 % 4) = 1;
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+------+----------+-----------------------+
| id | select_type | table | partitions | type  | possible_keys | key       | key_len | ref  | rows | filtered | Extra                 |
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+------+----------+-----------------------+
|  1 | SIMPLE      | t1    | NULL       | range | idx_f1_f2     | idx_f1_f2 | 5       | NULL |    8 |   100.00 | Using index condition |
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+------+----------+-----------------------+
1 row in set, 1 warning (0.00 sec)

If you checked the GitHub link above, you might have noted that the header file only contains three of the counters. This is because icp_attempts is computed as the sum of the other three. As expected, icp_match equals the number of rows returned by the query. icp_no_match should also make sense if we check the number of rows present without the WHERE condition on f2.

mysql [localhost] {msandbox} (test) > SELECT * FROM t1 WHERE f1 < 3;
+------+------+------+
| f1   | f2   | f3   |
+------+------+------+
|    1 |    1 |    1 |
|    1 |    2 |    1 |
|    1 |    3 |    1 |
|    1 |    4 |    1 |
|    2 |    1 |    1 |
|    2 |    2 |    1 |
|    2 |    3 |    1 |
|    2 |    4 |    1 |
+------+------+------+
8 rows in set (0.00 sec)

So, 8 – 2 = 6, which is exactly icp_no_match‘s value. Finally, we are left with icp_out_of_range. For each end of range the ICP scan detects, this counter is incremented by one. We only scanned one range in the previous query, so let’s try something more interesting (scanning three ranges):

mysql [localhost] {msandbox} (test) > SET GLOBAL innodb_monitor_reset = module_icp;
Query OK, 0 rows affected (0.00 sec)
mysql [localhost] {msandbox} (test) > SELECT * FROM t1 WHERE ((f1 < 2) OR (f1 > 4 AND f1 < 6) OR (f1 > 8 AND f1 < 12)) AND (f2 % 4) = 1;
+------+------+------+
| f1   | f2   | f3   |
+------+------+------+
|    1 |    1 |    1 |
|    5 |    1 |    1 |
|    9 |    1 |    1 |
|   10 |    1 |    1 |
|   11 |    1 |    1 |
+------+------+------+
5 rows in set (0.00 sec)
mysql [localhost] {msandbox} (test) > SELECT NAME, COUNT_RESET FROM information_schema.INNODB_METRICS WHERE NAME LIKE 'icp%';
+------------------+-------------+
| NAME             | COUNT_RESET |
+------------------+-------------+
| icp_attempts     |          23 |
| icp_no_match     |          15 |
| icp_out_of_range |           3 |
| icp_match        |           5 |
+------------------+-------------+
4 rows in set (0.01 sec)

We have now scanned three ranges on f1, namely: (f1 < 2), (4 < f1 < 6) and (8 < f1 < 12). This is correctly reflected in the corresponding counter. Remember that the MariaDB Handler_icp_attempts status counter we looked at in the previous post does not take into account the out-of-range counts. This means the two “attempts” counters will not be the same!

mysql [localhost] {msandbox} (test) > SET GLOBAL innodb_monitor_reset = module_icp; SET GLOBAL innodb_monitor_reset = dml_reads; FLUSH STATUS;
...
mysql [localhost] {msandbox} (test) > SELECT * FROM t1 WHERE ((f1 < 2) OR (f1 > 4 AND f1 < 6) OR (f1 > 8 AND f1 < 12)) AND (f2 % 4) = 1;
...
5 rows in set (0.00 sec)
mysql [localhost] {msandbox} (test) > SELECT NAME, COUNT_RESET FROM information_schema.INNODB_METRICS WHERE NAME LIKE 'icp_attempts';
+--------------+-------------+
| NAME         | COUNT_RESET |
+--------------+-------------+
| icp_attempts |          23 |
+--------------+-------------+
1 row in set (0.00 sec)
mysql [localhost] {msandbox} (test) > SHOW STATUS LIKE 'Handler_icp_attempts';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| Handler_icp_attempts | 20    |
+----------------------+-------+
1 row in set (0.00 sec)

It can be a bit confusing to have two counters that supposedly measure the same counts yielding different values, so watch this if you use MariaDB.

ICP Counters in PMM

Today you can find an ICP counters graph for MariaDB (Handler_icp_attempts) in PMM 1.1.3.

Additionally, in release 1.1.4 you’ll find graphs for ICP metrics from information_schema.INNODB_METRICS: just look for the INNODB_METRICS-based graph on the InnoDB Metrics dashboard!

I hope you found this blog post series useful! Let me know if you have any questions or comments below.

Categories: MySQL

Webinar May 23, 2017: MongoDB Monitoring and Performance for the Savvy DBA

MySQL Performance Blog - Mon, 2017-05-22 16:54

Join Percona’s Senior Technical Services Engineer Bimal Kharel on Tuesday, May 23, 2017, as he presents a webinar on MongoDB monitoring called How to Help Your DBA’s Sleep Better at Night at 10:00 am PDT / 1:00 pm EDT (UTC-7).

Register Now

Are you trying to stay on top of your database before things turn ugly? Between metrics for throughput, database performance, resource utilization, resource saturation, errors (asserts) and many others, how do you know which one needs to be looked at NOW (and which can wait)?

Both DBAs and system admins must stay on top of the systems they manage. But filtering between metrics that need immediate attention and those that should be watched over time is challenging. In this webinar, Bimal narrows down the list of metrics that help you decide whether the on-call DBA gets their recommended eight hours of shuteye, or gets to run on caffeine with no sleep.

Bimal also discusses which graphs relate to each other, with examples from Percona’s Monitoring and Management (PMM) tool, to help you understand how things in MongoDB can impact other areas.

Please register for the webinar here.

Bimal Kharel, Senior Technical Services Engineer, Percona

Bimal is a MongoDB support engineer at Percona. Before Percona he worked as a MongoDB DBA at EA and Charles Schwab. He has been in various roles throughout his career, from graphics to web developer to systems administration. MongoDB was the first database Bimal got into (he used MySQL for some websites but never other relational databases).

Categories: MySQL

Percona Toolkit 3.0.3 is Now Available

MySQL Performance Blog - Fri, 2017-05-19 16:23

Percona announces the release of Percona Toolkit 3.0.3 on May 19, 2017.

Percona Toolkit is a collection of advanced command-line tools that perform a variety of MySQL and MongoDB server and system tasks too difficult or complex for DBAs to perform manually. Percona Toolkit, like all Percona software, is free and open source.

You can download Percona Toolkit packages from the website or install them from the official repositories.

This release includes the following changes:

New Features

  • Added the --skip-check-slave-lag option for pt-table-checksum, pt-online-schema-change, and pt-archiver. This option can be used to specify a list of servers for which to skip checking slave lag; see the sketch after this list.
  • 1642754: Added support for collecting replication slave information in pt-stalk.
  • PT-111: Added support for collecting information about variables from Performance Schema in pt-stalk. For more information, see 1642753.
  • PT-116: Added the --[no]use-insert-ignore option for pt-online-schema-change to force or prevent using IGNORE on INSERT statements. For more information, see 1545129.
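As a rough usage sketch of the new --skip-check-slave-lag option (hostnames and table names are placeholders, and this assumes the option takes a DSN and can be repeated once per server, as the release notes describe):

pt-online-schema-change --alter "ADD COLUMN c1 INT" \
  --skip-check-slave-lag h=replica1.example.com \
  --skip-check-slave-lag h=replica2.example.com \
  --execute D=mydb,t=mytable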

Bug Fixes

  • PT-115: Fixed OptionParser to accept repeatable DSNs.
  • PT-126: Fixed pt-online-schema-change to correctly parse comments. For more information, see 1592072.
  • PT-128: Fixed pt-stalk to include memory usage information. For more information, see 1510809.
  • PT-130: Fixed pt-mext to work with non-empty RSA public key. For more information, see 1587404.
  • PT-132: Fixed pt-online-schema-change to enable --no-drop-new-table when --no-swap-tables and --no-drop-triggers are used.

You can find release details in the release notes. Report bugs in Toolkit’s launchpad bug tracker.

Categories: MySQL

MongoDB Authentication and Roles: Creating Your First Personalized Role

MySQL Performance Blog - Wed, 2017-05-17 18:53

In this blog post, we’ll walk through the native MongoDB authentication and roles, and learn how to create personalized roles. It is a continuation of Securing MongoDB instances.

As mentioned before, MongoDB features a few authentication methods and built-in roles that offer great control of both who is connecting to the database and what they are allowed to do. However, some companies have their own security policies that are often not covered by the default roles. This blog post explains not only how to create personalized roles, but also how to grant the minimum required access to a user.

Authentication Methods

SCRAM-SHA-1 and MONGODB-CR are challenge-response protocols. All the users and passwords are saved encrypted in the MongoDB instance. Challenge-response authentication methods are widely used in client-server software on the internet. These methods do not send the password as plain text to the server when the client starts authentication. Each new session uses a different hash/code, which stops people from capturing the password by sniffing the network.

The MONGODB-CR method was deprecated in version 3.0.

x.509 authentication is an internal authentication mechanism that allows instances and clients to communicate with each other. All certificates are signed by the same Certificate Authority and must be valid. All network traffic is encrypted with a given key, and it is only possible to read data with a valid certificate signed by that key.

MongoDB also offers external authentication methods such as LDAP and Kerberos. When using LDAP, users can log in to MongoDB with their centralized passwords. LDAP is commonly used to manage users and passwords in large networks. Kerberos is a service that allows users to log in only once and then generates access tickets, so that the users are allowed to access other services. Some configuration is necessary to use external authentication.
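As a minimal illustration of the built-in (non-external) options, enabling client authentication plus keyfile-based internal authentication in mongod.conf looks roughly like this (a sketch only; the key file path is a placeholder):

security:
  # clients must authenticate (SCRAM-SHA-1 is the default challenge-response mechanism in 3.x)
  authorization: enabled
  # shared key used for internal authentication between replica set / cluster members
  keyFile: /etc/mongodb/keyfile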

Built-in roles

  • read: collStats, dbHash, dbStats, find, killCursors, listIndexes, listCollections
  • readWrite: all read privileges + convertToCapped, createCollection, dbStats, dropCollection, createIndex, dropIndex, emptycapped, insert, listIndexes, remove, renameCollectionSameDB, update.
  • readAnyDatabase: allows the user to perform read in any database except the local and the config databases.

And so on…

In this tutorial, we are going to give specific privileges to a user who is only allowed to read the database, except for one specific collection to which they can also write.

For this tutorial, we are using MongoDB 3.4 with previously configured authentication.

Steps:

  1. Create the database:
    mongo --authenticationDatabase admin -u superAdmin -p
    use percona
    db.foo.insert({x : 1})
    db.foo2.insert({x : 1})
  2. Create a new user:
    > db.createUser({user : 'client_read', pwd : '123', roles : ['read']})
    Successfully added user: { "user" : "client_read", "roles" : [ "read" ] }
  3. Log in with the user that has just been created and check the user access:
    ./mongo localhost/percona -u client_read -p
    MongoDB shell version v3.4.0-rc5
    Enter password:
    db.foo.find()
    { "_id" : ObjectId("586bc2e9cac0bbb93f325d11"), "x" : 1 }
    db.foo2.find().count()
    1
    // If the user tries to insert documents, they will receive an error:
    > db.foo.insert({x : 2})
    WriteResult({
        "writeError" : {
            "code" : 13,
            "errmsg" : "not authorized on percona to execute command { insert: "foo", documents: [ { _id: ObjectId('586bc36e7b114fb2517462f3'), x: 2.0 } ], ordered: true }"
        }
    })
  4. Log out and log in again with administrator user to create a new role for this user:
    mongo --authenticationDatabase admin -u superAdmin -p
    db.createRole({
        role : 'write_foo2_Collection',
        privileges : [
            { resource : { db : "percona", collection : "foo2" }, actions : ["insert", "remove"] }
        ],
        roles : ["read"]
    })
    db.updateUser('client_read', { roles : ['write_foo2_Collection'] })
  5. Check the new access:
    ./mongo
    db.auth('client_read', '123')
    1
    > show collections
    foo
    foo2
    > db.foo.find()
    { "_id" : ObjectId("586bc2e9cac0bbb93f325d11"), "x" : 1 }
    > db.foo2.insert({y : 2})
    WriteResult({ "nInserted" : 1 })
    > db.foo.insert({y : 2}) // does not have permission
    WriteResult({
        "writeError" : {
            "code" : 13,
            "errmsg" : "not authorized on percona to execute command { insert: "foo", documents: [ { _id: ObjectId('586bc5e26f05b3a5db849359'), y: 2.0 } ], ordered: true }"
        }
    })
  6. We can also add access to other database resources. Let’s suppose we would like to grant this just created user permission to execute a getLog command. This command is available in the clusterAdmin role, but we do not want to give all this role’s access to him. See https://docs.mongodb.com/v3.0/reference/privilege-actions/#authr.getLog.

    There is a caveat here. If we want to grant cluster privileges to a user, we should create the role in the admin database. Otherwise, the command will fail:

    db.grantPrivilegesToRole(
        "write_foo2_Collection",
        [
            { resource : { cluster : true }, actions : ["getLog"] }
        ]
    )
    Roles on the 'percona' database cannot be granted privileges that target other databases or the cluster

  7. We create the same role in the admin database instead. Keep in mind that this user will only work properly if the admin database is included in a restore; otherwise, the privileges fail:

    use admin
    db.createRole({
        role : 'write_foo2_Collection_getLogs',
        privileges : [
            { resource : { db : "percona", collection : "foo2" }, actions : ["insert", "remove"] },
            { resource : { cluster : true }, actions : ["getLog"] }
        ],
        roles : [ { role : "read", db : "percona" } ]
    })
    use percona
    db.updateUser( "client_read", {
        roles : [
            { role : "write_foo2_Collection_getLogs", db : "admin" }
        ]
    })

  8. Now the user has the same privileges as before, plus the getLog permission. We can test this user's new access with:

    mongo --authenticationDatabase percona -u client_read -p
    db.adminCommand({getLog : 'global'})
    {
        "totalLinesWritten" : 287,
        "log" : [ ....
        ….
    }

I hope you find this post useful. Please feel free to ping me on twitter @AdamoTonete or @percona and let us know your thoughts.

Categories: MySQL

Percona Live Open Source Database Conference 2017 Slides and Videos Available

MySQL Performance Blog - Tue, 2017-05-16 22:13

The slides and videos from the Percona Live Open Source Database Conference 2017 are available for viewing and download. The videos and slides cover the keynotes, breakout sessions and MySQL and MongoDB 101 sessions.

To view slides, go to the Percona Live agenda, select the talk you want slides for from the schedule, and click through to the talk web page. The slides are available below the talk description. There is also a page with all the slides that is searchable by topic, talk title, speaker, company or keywords.

To view videos, go to the Percona Live 2017 video page. The available videos are searchable by topic, talk title, speaker, company or keywords.

There are a few slides and videos outstanding due to unforeseen circumstances. However, we will upload those as they become available.

Some examples of videos and slide decks from the Percona Live conference:

MongoDB 101: Efficient CRUD Queries in MongoDB
Adamo Tonete, Senior Technical Engineer, Percona
Video: https://www.percona.com/live/17/content/efficient-crud-queries-mongodb
Slides: https://www.percona.com/live/17/sessions/efficient-crud-queries-mongodb

MySQL 101: Choosing a MySQL High Availability Solution
Marcos Albe, Principal Technical Services Engineer, Percona
Video: https://www.percona.com/live/17/content/choosing-mysql-high-availability-solution
Slides: https://www.percona.com/live/17/sessions/choosing-mysql-high-availability-solution

Breakout Session: Using the MySQL Document Store
Mike Zinner, Sr. Software Development Director and Alfredo Kojima, Sr. Software Development Manager, Oracle
Video: https://www.percona.com/live/17/content/using-mysql-document-store
Slides: https://www.percona.com/live/17/sessions/using-mysql-document-store

Keynote: Continuent is Back! But What Does Continuent Do Anyway?
Eero Teerikorpi, Founder and CEO and MC Brown, VP Products, Continuent
Video: https://www.percona.com/live/17/content/continuent-back-what-does-continuent-do-anyway
Slides: https://www.percona.com/live/17/sessions/continuent-back-what-does-continuent-do-anyway

Please let us know if you have any issues. Enjoy the videos!

Percona Live Europe 2017: Dublin, Ireland!

This year’s Percona Live Europe will take place September 25th-27th, 2017, in Dublin, Ireland. Put it on your calendar now! Information on speakers, talks, sponsorship and registration will be available in the coming months.

We have developed multiple sponsorship options to allow participation at a level that best meets your partnering needs. Our goal is to create a significant opportunity for our partners to interact with Percona customers, other partners and community members. Sponsorship opportunities are available for Percona Live Europe 2017.

Download a prospectus here.

We look forward to seeing you there!

Categories: MySQL

Percona Server for MongoDB 3.2.13-3.3 is Now Available

MySQL Performance Blog - Mon, 2017-05-15 17:55

Percona announces the release of Percona Server for MongoDB 3.2.13-3.3 on May 15, 2017. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open-source, fully compatible, highly scalable, zero-maintenance downtime database supporting the MongoDB v3.2 protocol and drivers. It extends MongoDB with MongoRocks, Percona Memory Engine, and PerconaFT storage engine, as well as enterprise-grade features like External Authentication, Audit Logging, Profiling Rate Limiting, and Hot Backup at no extra cost. Percona Server for MongoDB requires no changes to MongoDB applications or code.

NOTE: We deprecated the PerconaFT storage engine. It will not be available in future releases.

This release is based on MongoDB 3.2.13 and includes the following additional changes:

  • #PSMDB-127: Fixed cleanup of deleted documents and indexes for MongoRocks. When you upgrade to this release, deferred compaction may occur and cause database size to decrease significantly.
  • #PSMDB-133: Added the wiredTigerCheckpointSizeMB variable, set to 1000 in the configuration template for WiredTiger. Valid values are 32 to 2048 (2GB), with the latter being default.
  • #PSMDB-138: Implemented SERVER-23418 for MongoRocks.

Percona Server for MongoDB 3.2.13-3.3 release notes are available in the official documentation.

Categories: MySQL

Percona Server for MySQL 5.7.18-14 is Now Available

MySQL Performance Blog - Fri, 2017-05-12 18:00

Percona announces the GA release of Percona Server for MySQL 5.7.18-14 on May 12, 2017. Download the latest version from the Percona web site or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository.

Based on MySQL 5.7.18, including all the bug fixes in it, Percona Server for MySQL 5.7.18-14 is the current GA release in the Percona Server for MySQL 5.7 series. Percona provides completely open-source and free software. Find release details in the 5.7.18-14 milestone at Launchpad.

New Features:

Bugs Fixed:
  • Deadlock could occur in I/O-bound workloads when server was using several small buffer pool instances in combination with small redo log files and variable innodb_empty_free_list_algorithm set to backoff algorithm. Bug fixed #1651657.
  • Fixed a memory leak in Percona TokuBackup. Bug fixed #1669005.
  • Compressed columns with dictionaries could not be added to a partitioned table by using ALTER TABLE. Bug fixed #1671492.
  • Fixed a memory leak that happened in case of failure to create a multi-threaded slave worker thread. Bug fixed #1675716.
  • In-place upgrade from Percona Server 5.6 to 5.7 by using standalone packages would fail if /var/lib/mysql wasn’t defined as the datadir. Bug fixed #1687276.
  • Combination of using any audit API-using plugin, like Audit Log Plugin and Response Time Distribution, with multi-byte collation connection and PREPARE statement with a parse error could lead to a server crash. Bug fixed #1688698 (upstream #86209).
  • Fix for a #1433432 bug caused a performance regression due to suboptimal LRU manager thread flushing heuristics. Bug fixed #1631309.
  • Creating Compressed columns with dictionaries in MyISAM tables by specifying partition engines would not result in error. Bug fixed #1631954.
  • It was not possible to configure basedir as a symlink. Bug fixed #1639735.
  • Replication slave did not report Seconds_Behind_Master correctly when running in multi-threaded slave mode. Bug fixed #1654091 (upstream #84415).
  • DROP TEMPORARY TABLE would create a transaction in binary log on a read-only server. Bug fixed #1668602 (upstream #85258).
  • Processing GTIDs in the relay log that had already been executed was causing write/fsync amplification. Bug fixed #1669928 (upstream #85141).
  • Text/BLOB fields were not handling sorting of the empty string consistently between InnoDB and filesort. Bug fixed #1674867 (upstream #81810) by porting a Facebook patch for MySQL.
  • InnoDB adaptive hash index was using a partitioning algorithm which would produce uneven distribution when the server contained many tables with an identical schema. Bug fixed #1679155 (upstream #81814).
  • For plugin variables that are signed numbers, doing a SHOW VARIABLES would always show an unsigned number. Fixed by porting a Facebook patch for MySQL.

Other bugs fixed: #1629250 (upstream #83245), #1660828 (upstream #84786), #1664519 (upstream #84940), #1674299, #1670588 (upstream #84173), #1672389, #1674507, #1675623, #1650294, #1659224, #1662908, #1669002, #1671473, #1673800, #1674284, #1676441, #1676705, #1676847 (upstream #85671), #1677130 (upstream #85678), #1677162, #1677943, #1678692, #1680510 (upstream #85838), #1683993, #1684012, #1684078, #1684264, #1687386, #1687432, #1687600, and #1674281.

The release notes for Percona Server for MySQL 5.7.18-14 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

Categories: MySQL

Percona Server for MySQL 5.6.36-82.0 is Now Available

MySQL Performance Blog - Fri, 2017-05-12 17:43

Percona announces the release of Percona Server for MySQL 5.6.36-82.0 on May 12, 2017. Download the latest version from the Percona web site or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository.

Based on MySQL 5.6.36, and including all the bug fixes in it, Percona Server for MySQL 5.6.36-82.0 is the current GA release in the Percona Server for MySQL 5.6 series. Percona Server for MySQL is open-source and free – this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release are available in the 5.6.36-82.0 milestone on Launchpad.

New Features:

Bugs Fixed:
  • Deadlock could occur in I/O-bound workloads when server was using several small buffer pool instances in combination with small redo log files and variable innodb_empty_free_list_algorithm set to backoff algorithm. Bug fixed #1651657.
  • Querying TABLE_STATISTICS in combination with a stored function could lead to a server crash. Bug fixed #1659992.
  • tokubackup_slave_info file was created for a master server after taking the backup with Percona TokuBackup. Bug fixed #135.
  • Fixed a memory leak in Percona TokuBackup. Bug fixed #1669005.
  • Compressed columns with dictionaries could not be added to a partitioned table by using ALTER TABLE. Bug fixed #1671492.
  • Fixed a memory leak that happened in case of failure to create a multi-threaded slave worker thread. Bug fixed #1675716.
  • The combination of using any audit API-using plugin, like Audit Log Plugin and Response Time Distribution, with multi-byte collation connection and PREPARE statement with a parse error could lead to a server crash. Bug fixed #1688698 (upstream #86209).
  • Fix for a #1433432 bug in Percona Server 5.6.28-76.1 caused a performance regression due to suboptimal LRU manager thread flushing heuristics. Bug fixed #1631309.
  • Creating Compressed columns with dictionaries in MyISAM tables by specifying partition engines would not result in error. Bug fixed #1631954.
  • It was not possible to configure basedir as a symlink. Bug fixed #1639735.
  • Replication slave did not report Seconds_Behind_Master correctly when running in multi-threaded slave mode. Bug fixed #1654091 (upstream #84415).
  • DROP TEMPORARY TABLE would create a transaction in binary log on a read-only server. Bug fixed #1668602 (upstream #85258).
  • Creating a compression dictionary with innodb_fake_changes enabled could lead to a server crash. Bug fixed #1629257.

Other bugs fixed: #1660828 (upstream #84786), #1664519 (upstream #84940), #1674299, #1683456, #1670588 (upstream #84173), #1672389, #1674507, #1674867, #1675623, #1650294, #1659224, #1660565, #1662908, #1669002, #1671473, #1673800, #1674284, #1676441, #1676705, #1676847 (upstream #85671), #1677130 (upstream #85678), #1677162, #1678692, #1678792, #1680510 (upstream #85838), #1683993, #1684012, #1684078, #1684264, and #1674281.

Release notes for Percona Server for MySQL 5.6.36-82.0 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

Categories: MySQL

MyRocks and LOCK IN SHARE MODE

MySQL Performance Blog - Thu, 2017-05-11 20:36

In this blog post, we’ll look at MyRocks and the LOCK IN SHARE MODE.

Those who attended the March 30th webinar “MyRocks Troubleshooting” might remember our discussion with Yoshinori on LOCK IN SHARE MODE.

I did more tests, and I can confirm that his words are true: LOCK IN SHARE MODE works in MyRocks.

This quick example demonstrates this. The initial setup:

CREATE TABLE t (
  id int(11) NOT NULL,
  f varchar(100) DEFAULT NULL,
  PRIMARY KEY (id)
) ENGINE=ROCKSDB;
insert into t values(12345, 'value1'), (54321, 'value2');

In session 1:

session 1> begin;
Query OK, 0 rows affected (0.00 sec)
session 1> select * from t where id=12345 lock in share mode;
+-------+--------+
| id    | f      |
+-------+--------+
| 12345 | value1 |
+-------+--------+
1 row in set (0.01 sec)

In session 2:

session 2> begin;
Query OK, 0 rows affected (0.00 sec)
session 2> update t set f='value3' where id=12345;
ERROR HY000: Lock wait timeout exceeded; try restarting transaction

However, in the webinar I wanted to remind everyone about the differences between LOCK IN SHARE MODE and FOR UPDATE. To do so, I added the former to my "session 2" test. Once I did, it ignored the lock set in "session 1": I could update a row and commit:

session 2> select * from t where id=12345 lock in share mode;
+-------+--------+
| id    | f      |
+-------+--------+
| 12345 | value1 |
+-------+--------+
1 row in set (0.00 sec)
session 2> update t set f='value3' where id=12345;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
session 2> commit;
Query OK, 0 rows affected (0.02 sec)

I reported this behavior here, and also in the Percona Jira bug database: MYR-107. In Facebook's MySQL branch, this bug is already fixed.

This test demonstrates that it is fixed in Facebook's build. In "session 1":

session1> CREATE TABLE `t` (
    -> `id` int(11) NOT NULL,
    -> `f` varchar(100) DEFAULT NULL,
    -> PRIMARY KEY (`id`)
    -> ) ENGINE=ROCKSDB;
Query OK, 0 rows affected (0.00 sec)
session1> insert into t values(12345, 'value1'), (54321, 'value2');
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0
session1> begin;
Query OK, 0 rows affected (0.00 sec)
session1> select * from t where id=12345 lock in share mode;
+-------+--------+
| id    | f      |
+-------+--------+
| 12345 | value1 |
+-------+--------+
1 row in set (0.00 sec)

And now in another session:

session2> begin;
Query OK, 0 rows affected (0.00 sec)
session2> select * from t where id=12345 lock in share mode;
+-------+--------+
| id    | f      |
+-------+--------+
| 12345 | value1 |
+-------+--------+
1 row in set (0.00 sec)
session2> update t set f='value3' where id=12345;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction: Timeout on index: test.t.PRIMARY

If you want to test the fix with the Facebook MySQL build, you need to update submodules to download the patch: git submodule update.

Categories: MySQL

Percona Server for MySQL 5.5.55-38.8 is Now Available

MySQL Performance Blog - Wed, 2017-05-10 17:43

Percona announces the release of Percona Server for MySQL 5.5.55-38.8 on May 10, 2017. Based on MySQL 5.5.55, including all the bug fixes in it, Percona Server for MySQL 5.5.55-38.8 is now the current stable release in the 5.5 series.

Percona Server for MySQL is open-source and free. You can find release details in the 5.5.55-38.8 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

New Features:
  • Percona Server 5.5 packages are now available for Ubuntu 17.04 (Zesty Zapus).
Bugs Fixed:
  • If a bitmap write I/O error happened in the background log tracking thread while a FLUSH CHANGED_PAGE_BITMAPS was executing concurrently, it could cause a server crash. Bug fixed #1651656.
  • Querying TABLE_STATISTICS in combination with a stored function could lead to a server crash. Bug fixed #1659992.
  • Queries from the INNODB_CHANGED_PAGES table would needlessly read potentially incomplete bitmap data past the needed LSN range. Bug fixed #1625466.
  • It was not possible to configure basedir as a symlink. Bug fixed #1639735.

Other bugs fixed: #1688161, #1683456, #1670588 (upstream #84173), #1672389, #1675623, #1660243, #1677156, #1680061, #1680510 (upstream #85838), #1683993, #1684012, #1684025, and #1674281.

Find the release notes for Percona Server for MySQL 5.5.55-38.8 in our online documentation. Report bugs on the launchpad bug tracker.

Categories: MySQL

Percona-Lab/mongodb_consistent_backup: 1.0 Release Explained

MySQL Performance Blog - Wed, 2017-05-10 16:58

In this blog post, I will cover the Percona-Lab/mongodb_consistent_backup tool and the improvements in the 1.0.1 release of the tool.

Percona-Lab/mongodb_consistent_backup

mongodb_consistent_backup is a tool for performing cluster consistent backups on MongoDB clusters or single-replica sets. This tool is open source Python code, developed by Percona and published under our Percona-Lab GitHub repository. Percona-Lab is a place for code projects maintained and supported with only best-effort from Percona.

By considering the entire MongoDB cluster’s shards and individual shard members, mongodb_consistent_backup can back up a cluster with one or many shards to a single point in time. Single-point-in-time consistency of cluster backups is critical to data integrity for any “sharded” database technology, and is a topic often overlooked in database deployments.

This topic is explained in detail by David Murphy in this Percona blog: https://www.percona.com/blog/2016/07/25/mongodb-consistent-backups/.

1.0 Release

mongodb_consistent_backup originally was a single replica set backup script internal to Percona, which morphed into a large multi-threaded/concurrent Python project. It was released to the public (Percona-Lab) with some rough edges.

This release focuses on improving the efficiency and reliability of the existing components, addressing many of the pain points in extending, deploying and troubleshooting the tool, and adding some small features.

New Features: Config File Overhaul

The tool was moved to use a structured, nested YAML config file instead of the messy config implemented in 0.x.

You can see a full example of this new format at this URL: https://github.com/Percona-Lab/mongodb_consistent_backup/blob/master/conf/mongodb-consistent-backup.example.conf

Here’s an example of a very basic config file that’s using 3 x replica-set config servers as “seed hosts” (a new feature in 1.0!), username+password and the optional Nagios NSCA notification method:

production:
  host: csReplSet/config01:27019,config02:27019,config03:27019
  username: mongodb_consistent_password
  password: "correct horse battery staple"
  authdb: admin
  log_dir: /var/log/mongodb_consistent_backup
  backup:
    method: mongodump
    name: production-eu
    location: /var/lib/mongodb_consistent_backup
  archive:
    method: tar
  notify:
    method: nsca
    nsca:
      check_host: mongodb-production-eu
      check_name: "mongodb_consistent_backup"
      server: nagios01.example.com:5667
  upload:
    method: none
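With a config file like the one above saved as, say, /etc/mongodb-consistent-backup.conf (the path is a placeholder), a backup of the "production" environment can then be launched roughly like this, using the -c and -e flags shown in the tool's help output further below:

$ mongodb-consistent-backup -c /etc/mongodb-consistent-backup.conf -e production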

New Features: Logging

Overall there is much more logged in this release, both in “regular” mode and “verbose” mode. A highlight for this release is live logging of the output of mongodump, something that was missing from the 0.x versions of the tool.

Now we can see the progress of the backup of each shard/replset in a cluster! Below we can see the backup of csReplSet (a config server replica set) dump many collections and complete its backup. Afterwards, we can see the replica sets “test1” and “test2” dumping “wikipedia.pages”.

...
[2017-05-05 20:11:05,366] [INFO] [MongodumpThread-7] [MongodumpThread:wait:72] csReplSet/172.17.0.1:27019: done dumping config.settings (1 document)
[2017-05-05 20:11:05,367] [INFO] [MongodumpThread-7] [MongodumpThread:wait:72] csReplSet/172.17.0.1:27019: writing config.version to
[2017-05-05 20:11:05,372] [INFO] [MongodumpThread-7] [MongodumpThread:wait:72] csReplSet/172.17.0.1:27019: done dumping config.version (1 document)
[2017-05-05 20:11:05,373] [INFO] [MongodumpThread-7] [MongodumpThread:wait:72] csReplSet/172.17.0.1:27019: writing config.locks to
[2017-05-05 20:11:05,377] [INFO] [MongodumpThread-7] [MongodumpThread:wait:72] csReplSet/172.17.0.1:27019: done dumping config.locks (3 documents)
[2017-05-05 20:11:05,378] [INFO] [MongodumpThread-7] [MongodumpThread:wait:72] csReplSet/172.17.0.1:27019: writing config.databases to
[2017-05-05 20:11:05,381] [INFO] [MongodumpThread-7] [MongodumpThread:wait:72] csReplSet/172.17.0.1:27019: done dumping config.databases (1 document)
[2017-05-05 20:11:05,383] [INFO] [MongodumpThread-7] [MongodumpThread:wait:72] csReplSet/172.17.0.1:27019: writing config.tags to
[2017-05-05 20:11:05,385] [INFO] [MongodumpThread-7] [MongodumpThread:wait:72] csReplSet/172.17.0.1:27019: done dumping config.tags (0 documents)
[2017-05-05 20:11:05,387] [INFO] [MongodumpThread-7] [MongodumpThread:wait:72] csReplSet/172.17.0.1:27019: writing config.changelog to
[2017-05-05 20:11:05,399] [INFO] [MongodumpThread-7] [MongodumpThread:wait:72] csReplSet/172.17.0.1:27019: done dumping config.changelog (112 documents)
[2017-05-05 20:11:05,401] [INFO] [MongodumpThread-7] [MongodumpThread:wait:72] csReplSet/172.17.0.1:27019: writing captured oplog to
[2017-05-05 20:11:05,578] [INFO] [MongodumpThread-7] [MongodumpThread:run:133] Backup csReplSet/172.17.0.1:27019 completed in 0.71 seconds, 0 oplog changes
[2017-05-05 20:11:08,042] [INFO] [MongodumpThread-5] [MongodumpThread:wait:72] test1/172.17.0.1:27017: [........................] wikipedia.pages 636/35080 (1.8%)
[2017-05-05 20:11:08,071] [INFO] [MongodumpThread-6] [MongodumpThread:wait:72] test2/172.17.0.1:28017: [........................] wikipedia.pages 878/35118 (2.5%)
[2017-05-05 20:11:11,041] [INFO] [MongodumpThread-5] [MongodumpThread:wait:72] test1/172.17.0.1:27017: [#.......................] wikipedia.pages 1853/35080 (5.3%)
[2017-05-05 20:11:11,068] [INFO] [MongodumpThread-6] [MongodumpThread:wait:72] test2/172.17.0.1:28017: [#.......................] wikipedia.pages 2063/35118 (5.9%)
[2017-05-05 20:11:14,043] [INFO] [MongodumpThread-5] [MongodumpThread:wait:72] test1/172.17.0.1:27017: [##......................] wikipedia.pages 2983/35080 (8.5%)
[2017-05-05 20:11:14,075] [INFO] [MongodumpThread-6] [MongodumpThread:wait:72] test2/172.17.0.1:28017: [##......................] wikipedia.pages 3357/35118 (9.6%)
[2017-05-05 20:11:17,040] [INFO] [MongodumpThread-5] [MongodumpThread:wait:72] test1/172.17.0.1:27017: [##......................] wikipedia.pages 4253/35080 (12.1%)
[2017-05-05 20:11:17,070] [INFO] [MongodumpThread-6] [MongodumpThread:wait:72] test2/172.17.0.1:28017: [###.....................] wikipedia.pages 4561/35118 (13.0%)
[2017-05-05 20:11:20,038] [INFO] [MongodumpThread-5] [MongodumpThread:wait:72] test1/172.17.0.1:27017: [###.....................] wikipedia.pages 5180/35080 (14.8%)
[2017-05-05 20:11:20,067] [INFO] [MongodumpThread-6] [MongodumpThread:wait:72] test2/172.17.0.1:28017: [###.....................] wikipedia.pages 5824/35118 (16.6%)
[2017-05-05 20:11:23,050] [INFO] [MongodumpThread-5] [MongodumpThread:wait:72] test1/172.17.0.1:27017: [####....................] wikipedia.pages 6216/35080 (17.7%)
[2017-05-05 20:11:23,072] [INFO] [MongodumpThread-6] [MongodumpThread:wait:72] test2/172.17.0.1:28017: [####....................] wikipedia.pages 6964/35118 (19.8%)
...

Also, while backup data is gathered the status output from each Oplog tailing thread is now logged every 30 seconds (by default):

...
[2017-05-05 20:12:09,648] [INFO] [TailThread-2] [TailThread:status:60] Oplog tailer test1/172.17.0.1:27017 status: 256 oplog changes, ts: Timestamp(1494020048, 6)
[2017-05-05 20:12:11,033] [INFO] [TailThread-3] [TailThread:status:60] Oplog tailer test2/172.17.0.1:28017 status: 1588 oplog changes, ts: Timestamp(1494020049, 50)
[2017-05-05 20:12:22,804] [INFO] [TailThread-4] [TailThread:status:60] Oplog tailer csReplSet/172.17.0.1:27019 status: 43 oplog changes, ts: Timestamp(1494020062, 1)
...

You can now write log files to disk by setting the ‘log_dir’ config variable or the ‘--log-dir’ command-line flag. One log file per backup is written to this directory, with a symlink pointing to the latest log file. The previous backup’s log file is automatically compressed with gzip.

New Features: ZBackup

ZBackup is an open-source de-duplication, compression and (optional) encryption tool for archive-like data (similar to backups). Files that are fed into ZBackup are organized at a block level into pieces called “bundles”. When more files are fed into ZBackup, it can re-use the bundles when it notices the same blocks are being backed up. This approach provides significant savings on disk space (required for many database backups). To add to the savings, all data in ZBackup is compressed using LZMA compression, which generally compresses better than gzip/deflate or zip. ZBackup also supports optional AES-128 encryption at rest. You enable it by providing a key file that allows ZBackup to encode/decode the data.
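To get a feel for how ZBackup works on its own, outside of mongodb_consistent_backup, here is a rough standalone sketch (the repository path and backup names are placeholders; consult ZBackup's documentation for the exact options):

# initialize an unencrypted ZBackup storage directory
zbackup init --non-encrypted /data/zbackup-repo

# feed a tar stream in; blocks already seen in earlier runs are de-duplicated
tar -c /var/lib/mongodb_consistent_backup | zbackup backup /data/zbackup-repo/backups/backup-2017-05-05.tar

# restore the stream later and unpack it
zbackup restore /data/zbackup-repo/backups/backup-2017-05-05.tar | tar -x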

mongodb_consistent_backup 1.0.0 now supports ZBackup as a new archiving method!

Below is an example of ZBackup used on a small database (about 1GB) that is constantly growing.

This graph compares the size added on disk for seven backups taken 10 minutes apart using two methods. The first method is mongodb_consistent_backup with mongodump’s built-in gzip compression (available via the --gzip flag since MongoDB 3.2) enabled. By default mongodump gzip is enabled by mongodb_consistent_backup (if it’s available), so this is a good “baseline”. The second method is mongodb_consistent_backup with mongodump gzip compression disabled and ZBackup used as the mongodb_consistent_backup archiving method, a post-backup stage in our tool. Notice that each backup in the graph after the first one only adds 14-18 MB to the disk usage, meaning ZBackup was able to recognize similarities in the data.

To try out ZBackup as an archive method, use one of these methods:

  1. Set the field “method” under the “archive” section of your mongodb_consistent_backup config file to “zbackup” (example):
    production:
      ...
      archive:
        method: zbackup
      ...
  2. Or, add the --archive.method=zbackup flag to your command line (see the sketch after this list).
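For example (a sketch only, reusing the placeholder config-file path from earlier):

$ mongodb-consistent-backup -c /etc/mongodb-consistent-backup.conf -e production --archive.method=zbackup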

This archive method causes mongodb_consistent_backup to create a subdirectory in the backup location named “mongodb-consistent-backup_zbackup” and import completed backups into ZBackup after the backup stage. This directory contains the ZBackup storage files that it needs to operate, and they should not be modified!

Of course, there are trade-offs. ZBackup adds some additional system resource usage and time to the overall backup AND restore process: both importing data into and exporting data out of ZBackup take additional time.

By default ZBackup’s restore uses a very small amount of RAM for cache, so increasing the cache with the “--cache-size” flag may improve restore performance. ZBackup uses threading, so more CPUs can also improve the performance of backups and restores.

New Features: Docker Container

We now offer a Dockerfile for building mongodb_consistent_backup with all dependencies into a Docker container! The goal for the image is to be as “thin” as possible, and so the build merely downloads a prebuilt binary of the tool and installs dependencies. See: https://github.com/Percona-Lab/mongodb_consistent_backup/blob/master/Dockerfile

Some interesting use cases for a Docker-based deployment of the tool come to mind:

  • Running MongoDB backups using ephemeral containers on Apache Mesos or Kubernetes (with persistent volumes or remote upload)
  • Restricting system resources used by mongodb_consistent_backup via Docker/cgroup’s isolation features
  • Simplified deployment or isolated dependencies (e.g., Python, Mongodump, etc.)

Up-to-date images of mongodb_consistent_backup are available at this Dockerhub URL: https://hub.docker.com/r/timvaillancourt/mongodb_consistent_backup/. This image includes mongodb_consistent_backup, gzip-capable mongodump binaries and latest-stable ZBackup binaries.

To run the latest Dockerhub image:

$ docker run -i timvaillancourt/mongodb_consistent_backup:latest <mongodb_consistent_backup-flags here>
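A slightly fuller sketch (the host path, seed hosts and backup name below are placeholders; see the example script mentioned later for persistent-volume details) might look like:

$ docker run -i \
    -v /opt/mongobackups:/opt/mongobackups \
    timvaillancourt/mongodb_consistent_backup:latest \
    -H csReplSet/config01:27019,config02:27019,config03:27019 \
    -n production-eu \
    -l /opt/mongobackups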

To just list the “help” page (all available options):

$ docker run -i timvaillancourt/mongodb_consistent_backup:latest --help
usage: mongodb-consistent-backup [-h] [-c CONFIGPATH]
                                 [-e {production,staging,development}] [-V]
                                 [-v] [-H HOST] [-P PORT] [-u USER]
                                 [-p PASSWORD] [-a AUTHDB] [-n BACKUP.NAME]
                                 [-l BACKUP.LOCATION] [-m {mongodump}]
                                 [-L LOG_DIR] [--lock-file LOCK_FILE]
                                 [--sharding.balancer.wait_secs SHARDING.BALANCER.WAIT_SECS]
                                 [--sharding.balancer.ping_secs SHARDING.BALANCER.PING_SECS]
                                 [--archive.method {tar,zbackup,none}]
                                 [--archive.tar.compression {gzip,none}]
                                 [--archive.tar.threads ARCHIVE.TAR.THREADS]
                                 [--archive.zbackup.binary ARCHIVE.ZBACKUP.BINARY]
                                 [--archive.zbackup.cache_mb ARCHIVE.ZBACKUP.CACHE_MB]
                                 [--archive.zbackup.compression {lzma}]
                                 ...
...

An example script for running the container with persistent Docker volumes is available here: https://github.com/Percona-Lab/mongodb_consistent_backup/tree/master/scripts

New Features: Multiple Seed Hosts + Config Servers

mongodb_consistent_backup 1.0 introduces the ability to define a list of multiple “seed” hosts, preventing a potential single point of failure in your backups! If a host in the list is unavailable, it will be skipped.

Multiple hosts should be specified with this replica-set URL format, many hosts separated by commas:

    <replica-set>/<host/ip>:<port>,<host/ip>:<port>,…

Or you can specify a comma-separated list without the replica set name for non-replset nodes (e.g., mongos or non-replset config servers):

    <host/ip>:<port>,<host/ip>:<port>,…
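On the command line, the seed list is passed with the -H flag (hostnames here are placeholders):

    # replica-set seed hosts
    mongodb-consistent-backup -H rs0/db01.example.com:27017,db02.example.com:27017,db03.example.com:27017 ...

    # mongos or non-replset config server seed hosts
    mongodb-consistent-backup -H mongos01.example.com:27017,mongos02.example.com:27017 ...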

Also, the functionality to use cluster Config Servers as seed hosts was added. Before version 1.0 a clustered backup needed to use a single mongos router as a seed host to find all shards and cluster members. Sometimes mongos routers can come and go as you scale, making this design brittle.

With this new functionality, mongodb_consistent_backup can use the cluster config servers to map out the cluster; these are usually three fairly static hosts in an infrastructure. This makes the deployment and operation of the tool a bit simpler and more reliable.

Overall Improvements

As mentioned, a focus in this release was improving the existing code. A major refactoring of the project’s code structure was completed in 1.0; it moves the major “phases” or “stages” of the tool into their own Python sub-modules (e.g., “Backup” and “Archive”) that then auto-load their various “methods”, such as “mongodump” or “ZBackup”.

The code was broken into these high-level stages:

  1. Backup. The stage that gathers the backup of data. During this stage, Oplog tailing and resolving also occur if the backup is for a cluster. More backup methods are coming soon!
  2. Archive. The stage that archives and optionally compresses the backup data. The new ZBackup method also adds de-duplication and encryption ability to this stage.
  3. Upload. The stage that uploads the resulting data to remote storage. Currently only AWS S3 is supported, with Google Cloud Storage and Rsync support being added as we speak.
  4. Notify. The stage that notifies external systems of the success/failure of the backup. Currently, our tool only supports Nagios NSCA, with plans for PagerDuty and others to be added.

Some interesting code enhancements include:

  • Reusing of database connections. This reduces the number of connections on seed hosts.
  • Replication heartbeat time (“operational lag”). This is now considered in replica set lag calculations.
  • Added thread safety for oplog tailing threads. This resolves some issues on extremely-overloaded hosts.

Another focus was efficiency and preventing race conditions. The tool should be much less susceptible to error as a result, although if you see any problems we’d like to hear about them on our GitHub “Issues” page.

Lastly, we encourage the open source community to contribute additional functionality to this tool via our GitHub!

Release Notes:

  • 1.0.0
    • Move to dynamic code “Submodules” and subclassing of repeated components
    • Restructuring of YAML config to nested config
    • Safe start/stopping of oplog tailer threads, additional checking on all thread states
    • File-based logging with gzip of old log
    • Oplog tailer ‘oplogReplay’ performance optimization
    • Fixes to oplog durability to-disk
    • Live mongodump output to stdout in realtime
    • Oplog tailer status logging
    • ZBackup archive method: supporting deduplication, compression and optional AES encryption
    • Support for list discovery/seed hosts
    • Support configdb servers as cluster seed hosts
    • Fewer (reused) database connections
    • Database connections to use strong write concern
    • Consider replication operational lag in secondary scoring
    • Backup metadata is written for future functionality and troubleshooting
    • mongodb_consistent_backup.Errors custom exceptions for proper exception handling
    • Python PyPi support added
    • Dockerfile support for running under containers
    • Additional log messages
    • Support for MongoDB 3.4 datatypes
    • Significant reworking of existing code for efficiency, reliability and readability

More about our releases can be seen here: https://github.com/Percona-Lab/mongodb_consistent_backup/releases.

Categories: MySQL

MariaDB Handler_icp_% Counters: What They Are, and How To Use Them

MySQL Performance Blog - Tue, 2017-05-09 19:39

In this post we’ll see how MariaDB’s Handler_icp_% status counters (Handler_icp_attempts and Handler_icp_matches) measure ICP-related work done by the server and storage engine layers, and how to see whether our queries are getting any gains from using ICP.

These counters (as seen in SHOW STATUS output) are MariaDB-specific. In a later post, we will see how we can get this information in MySQL and Percona Server. This investigation spun off from comments in Michael’s post about the new MariaDB dashboard in PMM. Comments are very useful, so keep them coming!

Categories: MySQL

Percona Server for MongoDB 3.4.4-1.4 is Now Available

MySQL Performance Blog - Tue, 2017-05-09 17:18

Percona announces the release of Percona Server for MongoDB 3.4.4-1.4 on May 9, 2017. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open source, fully compatible, highly-scalable, zero-maintenance downtime database supporting the MongoDB v3.4 protocol and drivers. It extends MongoDB with Percona Memory Engine and MongoRocks storage engine, as well as several enterprise-grade features:

Percona Server for MongoDB requires no changes to MongoDB applications or code.

This release is based on MongoDB 3.4.4 and includes the following additional changes:

  • #PSMDB-122: Added the percona-server-mongodb-enable-auth.sh script to binary tarball.
  • #PSMDB-127: Fixed cleanup of deleted documents and indexes for MongoRocks. When you upgrade to this release, deferred compaction may occur and cause database size to decrease significantly.
  • #PSMDB-133: Added the wiredTigerCheckpointSizeMB variable, set to 1000 in the configuration template for WiredTiger. Valid values are 32 to 2048 (2GB), with the latter being the default.
  • #PSMDB-138: Implemented SERVER-23418 for MongoRocks.

Percona Server for MongoDB 3.4.4-1.4 release notes are available in the official documentation.

Categories: MySQL

Chasing a Hung MySQL Transaction: InnoDB History Length Strikes Back

MySQL Performance Blog - Mon, 2017-05-08 19:09

In this blog post, I’ll review how a hung MySQL transaction can cause the InnoDB history length to grow and negatively affect MySQL performance.

Recently I was helping a customer discover why SELECT queries were running slower and slower until the server was restarted (which got things back to normal). It took some time to get from that symptom to a final diagnosis. Please follow me on the journey of chasing this strange MySQL behavior!

Symptoms

Changes in the query response time can mean tons of things. We can check everything from the query plan to the disk performance. However, fixing it with a restart is less common. After looking at “show engine innodb status”, I noticed some strange lines:

Trx read view will not see trx with id >= 41271309593, sees < 41268384363
---TRANSACTION 41271309586, ACTIVE 766132 sec
2 lock struct(s), heap size 376, 0 row lock(s), undo log entries 1
...

There was a total of 940 transactions like this.

Another insight was the InnoDB transaction history length graph from Percona Monitoring and Management (PMM):

History length of 6 million and growing clearly indicates a problem.

Problem localized

There have been a number of blog posts describing a similar problem: Peter stated in a blog post: “InnoDB transaction history often hides dangerous ‘debt’“. As the InnoDB transaction history grows, SELECTs need to scan more and more previous versions of the rows, and performance suffers. That explains the issue: SELECT queries get slower and slower until restart. Peter also filed this bug: Major regression having many row versions.

But why does the InnoDB transaction history start growing? There are 940 transactions in this state: ACTIVE 766132 sec. MySQL’s process list shows those transactions in the “Sleep” state. It turns out that those transactions were “lost” or “hung”. As we can also see, each of those transactions holds two lock structures and one undo record, so they are not committed and not rolled back. They are sitting there doing nothing. In this case, with the default isolation level REPEATABLE-READ, InnoDB can’t purge the undo records (transaction history) for other transactions until these “hung” transactions are finished.
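
One quick way to spot such transactions is to query information_schema.innodb_trx for anything that has been open for a suspiciously long time. The sketch below is only an illustration (the one-hour threshold is an arbitrary assumption; adjust it to your workload):

mysql> SELECT trx_mysql_thread_id AS processlist_id, trx_id, trx_started, trx_rows_modified
       FROM information_schema.innodb_trx
       WHERE trx_started < NOW() - INTERVAL 1 HOUR
       ORDER BY trx_started;
-- and then, for each connection you have confirmed is abandoned:
-- mysql> KILL <processlist_id>;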

The quick solution is simple: kill those connections and InnoDB will roll back those transactions and purge transaction history. After killing those 940 transactions, the graph looked like this:

However, several questions remain:

  1. What are the queries inside this lost transaction? Where are they coming from? The problem is that neither MySQL’s process list nor InnoDB’s status shows the queries for this transaction, as it is not running them right now (the process list is a snapshot of what is happening inside MySQL at this very moment).
  2. Can we fix it so that the “hung” transactions don’t affect other SELECT queries and don’t cause the growth of transaction history?
Simulation

As it turns out, it is very easy to simulate this issue with sysbench.

Test preparation

To add some load, I’m using sysbench with 16 threads (you can use fewer, it does not really matter here) and a script for a “write-only” load (running for 120 seconds):

conn=" --db-driver=mysql --mysql-host=localhost --mysql-user=user --mysql-password=password --mysql-db=sbtest " sysbench --test=/usr/share/sysbench/tests/include/oltp_legacy/oltp.lua --mysql-table-engine=InnoDB --oltp-table-size=1000000 $conn prepare sysbench --num-threads=16 --max-requests=0 --max-time=120 --test=/usr/share/sysbench/tests/include/oltp_legacy/oltp.lua --oltp-table-size=1000000 $conn --oltp-test-mode=complex --oltp-point-selects=0 --oltp-simple-ranges=0 --oltp-sum-ranges=0 --oltp-order-ranges=0 --oltp-distinct-ranges=0 --oltp-index-updates=1 --oltp-non-index-updates=0 run

Simulate a “hung” transaction

While the above sysbench is running, open another connection to MySQL:

use test;
CREATE TABLE `a` (
  `i` int(11) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
insert into a values(1);
begin;
insert into a values(1);
select * from a;

Note: we will need to run the SELECT as a part of this transaction. Do not close the connection.

Watch the history

mysql> select name, count from information_schema.INNODB_METRICS where name like '%hist%';
+----------------------+-------+
| name                 | count |
+----------------------+-------+
| trx_rseg_history_len | 34324 |
+----------------------+-------+
1 row in set (0.00 sec)

mysql> select name, count from information_schema.INNODB_METRICS where name like '%hist%';
+----------------------+-------+
| name                 | count |
+----------------------+-------+
| trx_rseg_history_len | 36480 |
+----------------------+-------+
1 row in set (0.01 sec)

We can see it is growing. Now it is time to commit, roll back, or even kill our original transaction:

mysql> rollback;
...
mysql> select name, count from information_schema.INNODB_METRICS where name like '%hist%';
+----------------------+-------+
| name                 | count |
+----------------------+-------+
| trx_rseg_history_len | 793   |
+----------------------+-------+
1 row in set (0.00 sec)

As we can see, the history has been purged.

Finding the queries from the hung transactions

There are a number of options to find the queries from that “hung” transaction. In older MySQL versions, the only way is to enable the general log (or the slow query log). Starting with MySQL 5.6, we can use the Performance Schema. Here are the steps:

  1. Enable performance_schema if not enabled (it is disabled on RDS / Aurora by default).
  2. Enable events_statements_history:
    mysql> update performance_schema.setup_consumers set ENABLED = 'YES' where NAME='events_statements_history';
    Query OK, 1 row affected (0.00 sec)
    Rows matched: 1  Changed: 1  Warnings: 0
  3. Run the query to find all transactions started more than 10 seconds ago (change the number of seconds to match your workload):
    SELECT ps.id as processlist_id,
           trx_started, trx_isolation_level,
           esh.EVENT_ID,
           esh.TIMER_WAIT,
           esh.event_name as EVENT_NAME,
           esh.sql_text as SQL,
           esh.RETURNED_SQLSTATE,
           esh.MYSQL_ERRNO,
           esh.MESSAGE_TEXT,
           esh.ERRORS,
           esh.WARNINGS
    FROM information_schema.innodb_trx trx
    JOIN information_schema.processlist ps ON trx.trx_mysql_thread_id = ps.id
    LEFT JOIN performance_schema.threads th ON th.processlist_id = trx.trx_mysql_thread_id
    LEFT JOIN performance_schema.events_statements_history esh ON esh.thread_id = th.thread_id
    WHERE trx.trx_started < CURRENT_TIME - INTERVAL 10 SECOND
      AND ps.USER != 'SYSTEM_USER'
    ORDER BY esh.EVENT_ID\G
    ...
             PROCESS ID: 1971
            trx_started: 2017-05-03 17:36:47
    trx_isolation_level: REPEATABLE READ
               EVENT_ID: 79
             TIMER_WAIT: 33767000
             EVENT NAME: statement/sql/begin
                    SQL: begin
      RETURNED_SQLSTATE: 00000
            MYSQL_ERRNO: 0
           MESSAGE_TEXT: NULL
                 ERRORS: 0
               WARNINGS: 0
    *************************** 9. row ***************************
             PROCESS ID: 1971
            trx_started: 2017-05-03 17:36:47
    trx_isolation_level: REPEATABLE READ
               EVENT_ID: 80
             TIMER_WAIT: 2643082000
             EVENT NAME: statement/sql/insert
                    SQL: insert into a values(1)
      RETURNED_SQLSTATE: 00000
            MYSQL_ERRNO: 0
           MESSAGE_TEXT: NULL
                 ERRORS: 0
               WARNINGS: 0
    *************************** 10. row ***************************
             PROCESS ID: 1971
            trx_started: 2017-05-03 17:36:47
    trx_isolation_level: REPEATABLE READ
               EVENT_ID: 81
             TIMER_WAIT: 140305000
             EVENT NAME: statement/sql/select
                    SQL: select * from a
      RETURNED_SQLSTATE: NULL
            MYSQL_ERRNO: 0
           MESSAGE_TEXT: NULL
                 ERRORS: 0
               WARNINGS: 0
    Now we can see the list of queries from the old transaction (the MySQL query used was taken with modifications from this blog post: Tracking MySQL query history in long running transactions).

At this point, we can chase this issue at the application level and find out why this transaction was not committed. The typical causes:

  • There is a heavy, non-database-related process inside the application code. For example, the application starts a transaction to get a list of images for analysis and then starts an external application to process those images (machine learning or similar), which can take a very long time.
  • The application got an uncaught exception and exited, but the connection to MySQL was not closed for some reason (e.g., it was returned to the connection pool instead).

We can also try to configure the timeouts on MySQL or the application so that the connections are closed after “N” minutes.
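
A minimal sketch of the server-side approach (the 600-second value is just an example; note that this also closes legitimately idle connections, and the application must be ready to reconnect):

mysql> SET GLOBAL wait_timeout = 600;
mysql> SET GLOBAL interactive_timeout = 600;

When MySQL closes an idle connection this way, the open transaction is rolled back and its undo records can then be purged.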

Changing the transaction isolation level to fix the InnoDB transaction history issue

Now that we know which transaction is holding up the purge process of InnoDB history, we can find this transaction and make changes so it will not “hang”. We can change the transaction isolation level from REPEATABLE READ (default) to READ COMMITTED. In READ COMMITTED, InnoDB does not need to maintain history length when other transactions have committed changes. (More details about different isolation methods and how they affect InnoDB transactions.) That will work in MySQL 5.6 and later. However, this doesn’t work in Amazon Aurora (as of now): even with READ COMMITTED isolation level, the history length still grows.
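
A minimal sketch of making the change (session-level for a quick test, or globally so that all new connections pick it up):

mysql> SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
mysql> SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;

Or persistently, in my.cnf under [mysqld]:

transaction-isolation = READ-COMMITTED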

Here is the list of MySQL versions where changing the isolation level fixes the issue:

MySQL Version   Transaction isolation   InnoDB History Length
MySQL 5.6       repeatable read         history is not purged until the "hung" transaction finishes
MySQL 5.6       read committed          (fixed) history is purged
Aurora          repeatable read         history is not purged until the "hung" transaction finishes
Aurora          read committed          history is not purged until the "hung" transaction finishes


Summary

Hung transactions can cause the InnoDB history length to grow and (surprisingly, at first glance) affect the performance of other running SELECT queries. We can use Performance Schema to chase down the “hung” transaction. Changing the MySQL transaction isolation level can potentially help.

Categories: MySQL

How much disk space should I allocate for Percona Monitoring and Management?

MySQL Performance Blog - Thu, 2017-05-04 18:15

I heard a frequent question at last week’s Percona Live conference regarding Percona Monitoring and Management (PMM): How much disk space should I allocate for PMM Server?

First, let’s review the three components of Percona Monitoring and Management that consume non-negligible disk space:

  1. Prometheus data source for the time series metrics
  2. Query Analytics (QAN), which uses Percona Server XtraDB (Percona’s enhanced version of the InnoDB storage engine)
  3. Orchestrator, also backed by Percona Server XtraDB

Of these, you’ll find that Prometheus is generally your largest consumer of disk space. Prometheus hits a steady state of disk utilization once you reach the defined storage.local.retention period. If you deploy Percona Monitoring and Management 1.1.3 (the latest stable version), you’ll be using a retention period of 30 days. “Steady state” in this case means you’re not adding or removing nodes frequently, since each node comes with its own 1k-7k metrics to be scraped. Prometheus stores one time series per metric scraped, and automatically trims chunks (like InnoDB pages) from the tail of the time series once they exceed the retention period (so the disk requirement for a static list of metrics remains “fixed” for the retention period).

However, if you’re in a dynamic environment with nodes being added and removed frequently, or you’re on the extreme end like these guys who re-deploy data centers every day, steady state for Prometheus may remain an elusive goal. The guidance you find below may help you establish at least a minimum disk provisioning threshold.

QAN is based on a web application and uses Percona Server 5.7.17 as its datastore. The Percona QAN agent runs one instance per monitored MySQL server, and obtains queries from either the Slow log or Performance Schema. It performs analysis locally to generate a list of unique queries and their corresponding metrics: min, max, avg, med, and p95. There are dimensions based on Tmp table, InnoDB, Query time, Lock time, etc. Check the schema for a full listing, as there are actually 149 columns on this table (show create table pmm.query_class_metrics\G). While the table is wide, it isn’t too long: PMM Demo has ~9 million rows, which is approximately 1 row per distinct query per minute per host.

Finally, there is Orchestrator. While the disk requirements for Orchestrator are not zero, they are certainly dwarfed by Prometheus and QAN.  As you’ll read below, Percona’s Orchestrator footprint is a measly ~250MB, which is a rounding error. I’d love to hear other experiences with Orchestrator and how large your InnoDB footprint is for a large or active cluster.

For comparison, here is the resource consumption from Percona’s PMM Demo site:

  • ~47k time series
  • 25 hosts, which is on average ~1,900 time series/host; some are over 4k
  • 8-day retention for metrics in Prometheus
  • Prometheus data is ~40GB
    • Which should not increase until we add more hosts, as this isn’t a dynamic Kubernetes environment
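
Working backwards from these numbers gives a rough rule of thumb (this is an estimate only, and assumes a 1-second scrape interval and a demo that has reached steady state): ~47,000 series × 8 days × 86,400 samples/day ≈ 32 billion samples, so 40 GB / 32 billion ≈ 1.2-1.3 bytes per sample. At that rate, a host exposing ~2,000 metrics needs roughly 2,000 × 86,400 × 30 × 1.3 bytes ≈ 6-7 GB for a 30-day retention period; multiply by your host count (and adjust for your retention) to arrive at a minimum disk provisioning figure for Prometheus.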
Categories: MySQL

Storing UUID and Generated Columns

MySQL Performance Blog - Wed, 2017-05-03 18:15

A lot of things have been said about UUIDs, and about storing UUIDs in an optimized way. Now that we have generated columns, we can store the information contained inside the UUID in decomposed form and reassemble it again with generated columns. This blog post demonstrates this process.

First, I used a simple table with one char field that I called uuid_char to establish a base case. I used this table with and without a primary key:

CREATE TABLE uuid_char (
  uuid char(36) CHARACTER SET utf8 NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE uuid_char_pk (
  uuid char(36) CHARACTER SET utf8 NOT NULL,
  PRIMARY KEY (uuid)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

I performed the tests on a local VM over MySQL 5.7.17 for 30 seconds, with only two threads, because I wanted to just compare the executions:

sysbench --oltp-table-size=100000000 --test=/usr/share/doc/sysbench/tests/db/insert_uuid_generated_columns.uuid_char.lua --oltp-tables-count=4 --num-threads=2 --mysql-user=root --max-requests=5000000 --report-interval=5 --max-time=30 --mysql-db=generatedcolumn run

One pair of executions is with the UUID generated by sysbench, which simulates the UUID that comes from the app:

rs = db_query("INSERT INTO uuid_char (uuid) VALUES " .. string.format("('%s')",c_val))

An alternative execution is for when the UUID is generated by the MySQL function uuid():

rs = db_query("INSERT INTO uuid_char (uuid) VALUES (uuid())")

Below we can see the results: 

The inserts are faster without a PK (but only by 5%), and using the uuid() function doesn’t impact performance.

Now, let’s see the alternative method, which is decomposing the UUID. It has four main information sets:

  • Timestamp: this is a number with seven decimals.
  • MAC: the MAC address of the device that creates the UUID
  • Unique value: a value that helps avoid duplicates
  • UUID version: this will always be “1”, as we are going to use version 1. If you are going to use another version, you will need to review the functions that I used.
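
To make the decomposition concrete, here is how those sections line up in a version-1 UUID (the UUID below is a made-up example, not one produced by these tests):

SET @u = '58e0a7d7-eebc-11d8-9669-0800200c9a66';  -- hypothetical v1 UUID
SELECT substring(@u, 1, 8)  AS time_low,   -- low 32 bits of the 60-bit timestamp
       substring(@u, 10, 4) AS time_mid,   -- middle 16 bits of the timestamp
       substring(@u, 15, 1) AS version,    -- always '1' for version 1
       substring(@u, 16, 3) AS time_high,  -- high 12 bits of the timestamp
       substring(@u, 20, 4) AS clock_seq,  -- the "unique value" above
       right(@u, 12)        AS node;       -- the MAC address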

The structure of the table that we’ll use is:

CREATE TABLE `uuid_generated` (
  `timestamp` decimal(18,7) unsigned NOT NULL,
  `mac` bigint(20) unsigned NOT NULL,
  `temp_uniq` binary(2) NOT NULL,
  PRIMARY KEY (`timestamp`,`mac`,`temp_uniq`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

To understand how a UUID is unwrapped, I used this stored procedure (which receives a UUID and inserts it into the table):

CREATE PROCEDURE ins_generated_uuid (uuid char(38))
begin
  set @hex_timestamp = concat(substring(uuid, 16, 3), substring(uuid, 10, 4), substring(uuid, 1, 8));
  set @timestamp = concat(conv(@hex_timestamp,16,10) div 10000000 - (141427 * 24 * 60 * 60),'.',right(conv(@hex_timestamp,16,10),7));
  set @mac = conv(right(uuid,12),16,10);
  set @temp_uniq = unhex(substring(uuid,20,4));
  insert into uuid_generated (timestamp,mac,temp_uniq) values (@timestamp,@mac,@temp_uniq);
end ;;

Explanation:

  • @hex_timestamp is a temporary variable that collects the timestamp in hexadecimal format from the different sections of the UUID
  • @timestamp transforms the hexadecimal timestamp to a decimal number
  • @mac pulls the last number in the UUID (a MAC address) so we can store it as a bigint
  • @temp_uniq is a value that preserves uniqueness, which is why we store it as binary and place it at the end of the Primary Key

If I want to get the UUID back, I can use these two generated columns:

`hex_timestamp` char(40) GENERATED ALWAYS AS (conv(((`timestamp` * 10000000) + (((141427 * 24) * 60) * 600000000)),10,16)) VIRTUAL,
`uuid_generated` char(38) GENERATED ALWAYS AS (concat(right(`hex_timestamp`,8),'-',substr(`hex_timestamp`,4,4),'-1',left(`hex_timestamp`,3),'-',convert(hex(`temp_uniq`) using utf8),'-',lpad(conv(`mac`,10,16),12,'0'))) VIRTUAL,

We performed tests over five scenarios:

  • Without the generated columns, the insert used data generated dynamically
  • Same as before, but we added a char field that stores the UUID
  • With the char field, and adding the generated column
  • We used the stored procedure detailed before to insert the data into the table
  • We also tested the performance using triggers

The difference between the Base and the previous table structure with Primary Keys is very small. So, the new basic structure has no impact on performance.

We see that Base and +Char Field have the same performance. So leaving a char field has no performance impact (it just uses more disk space).

Using generated columns impacts performance. This is expected, as the columns are generated to validate the type before the row is inserted.

Finally, the use of triggers and stored procedures has the same impact on performance.

These are the three table structures:

CREATE TABLE `uuid_generated` (
  `timestamp` decimal(18,7) unsigned NOT NULL,
  `mac` bigint(20) unsigned NOT NULL,
  `temp_uniq` binary(2) NOT NULL,
  PRIMARY KEY (`timestamp`,`mac`,`temp_uniq`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE `uuid_generated_char` (
  `timestamp` decimal(18,7) unsigned NOT NULL,
  `mac` bigint(20) unsigned NOT NULL,
  `temp_uniq` binary(2) NOT NULL,
  `uuid` char(38) DEFAULT NULL,
  PRIMARY KEY (`timestamp`,`mac`,`temp_uniq`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE `uuid_generated_char_plus` (
  `timestamp` decimal(18,7) unsigned NOT NULL,
  `mac` bigint(20) unsigned NOT NULL,
  `temp_uniq` binary(2) NOT NULL,
  `uuid` char(38) DEFAULT NULL,
  `hex_timestamp` char(40) GENERATED ALWAYS AS (conv(((`timestamp` * 10000000) + (((141427 * 24) * 60) * 600000000)),10,16)) VIRTUAL,
  `uuid_generated` char(38) GENERATED ALWAYS AS (concat(right(`hex_timestamp`,8),'-',substr(`hex_timestamp`,4,4),'-1',left(`hex_timestamp`,3),'-',convert(hex(`temp_uniq`) using utf8),'-',lpad(conv(`mac`,10,16),12,'0'))) VIRTUAL,
  PRIMARY KEY (`timestamp`,`mac`,`temp_uniq`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

And this is the trigger:

DROP TRIGGER IF EXISTS ins_generated_uuid;
delimiter ;;
CREATE TRIGGER ins_uuid_generated BEFORE INSERT ON uuid_generated
FOR EACH ROW
begin
  set @hex_timestamp = concat(substring(NEW.uuid, 16, 3), substring(NEW.uuid, 10, 4), substring(NEW.uuid, 1, 8));
  set NEW.timestamp = concat(conv(@hex_timestamp,16,10) div 10000000 - (141427 * 24 * 60 * 60),'.',right(conv(@hex_timestamp,16,10),7));
  set NEW.mac = conv(right(NEW.uuid,12),16,10);
  set NEW.temp_uniq = unhex(substring(NEW.uuid,20,4));
end ;;
delimiter ;

Conclusions

Decomposing the UUID is an alternative to storing it in an ordered form, but it won’t speed up inserts. It does make it simpler to execute queries over a range of dates and to look at the rows for a particular device, as you will be able to filter on the MAC (it is recommended to add an index for it). Generated columns give you the ability to rebuild the UUID as a single string.
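
As a sketch of the kind of access this layout enables (the index name and date range below are made up, and I’m assuming the timestamp column holds Unix-epoch seconds, as produced by the procedure above):

ALTER TABLE uuid_generated ADD KEY idx_mac (mac);

SELECT *
FROM uuid_generated
WHERE timestamp BETWEEN UNIX_TIMESTAMP('2017-05-01 00:00:00')
                    AND UNIX_TIMESTAMP('2017-05-02 00:00:00')
  AND mac = conv('0800200c9a66', 16, 10);  -- the device's MAC, as stored by the procedure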

Categories: MySQL

Percona University in Europe May 9 and May 11

MySQL Performance Blog - Tue, 2017-05-02 18:20

In 2013 we started Percona University, which consists of technology discussion events held in different cities around the world. The next installments of Percona University in Europe are next week when I fly there for Percona University Berlin (May 9) and Percona University Budapest (May 11). Both events are free to attend, and you are very welcome to join us for either of them.

Below are some questions and answers about why you should attend a Percona University session:

What is Percona University? It is a half-day technical educational event, with a wider program when compared to a traditional meetup. Usually, we include about six hours of talks split by a 30-minute coffee break. We encourage people to join us at any point during these talks – we understand that not everyone can take half a day off from their work or studies.

What is on the agenda for each of the events? Full agendas and registration forms for the Berlin and Budapest events are available at the indicated links.

Does the word “University” mean that we won’t cover any in-depth topics, and these events would only interest college/university students? No, it doesn’t. We designed Percona University presentations for all kinds of “students,” including professionals with years of database industry experience. The word “University” means that this event series is about educating attendees on technical topics (it’s not a sales-oriented event, it’s about educating the community).

Does Percona University cover only Percona technology? We will definitely mention Percona technology, but we will also focus on real-world technical issues and recommend solutions that work (regardless of whether Percona developed them).

Are there other Percona University events coming up besides Berlin and Budapest? We will hold more Percona University events in different locations in the future. Our events newsletter is a good source of information about when and where they will occur. If you want to partner with Percona in organizing a Percona University event, contact our team. You can also check our list of technical webinars to get further educational insights.

These events are free and low-key! We want them to remain easy to organize in any city of the world. They aren’t meant to look like a full conference (like our Percona Live series). Percona University has a different format – it’s purposefully informal, and designed to be perfect for learning and networking. This is an in-person database community gathering, so feel free to come with interesting cases and tricky questions!

I hope to see many of you at Percona University in Europe, Berlin and Budapest editions!

Categories: MySQL

Webinar Thursday May 4, 2017: Percona Software News and Roadmap Update Q2 2017

MySQL Performance Blog - Mon, 2017-05-01 18:34

Come and listen to Percona CEO Peter Zaitsev on Thursday, May 4, 2017 at 11:00 am (PST) / 2:00 pm (EST) discuss Percona’s software news and roadmap, including Percona Server for MySQL and MongoDB, Percona XtraBackup, Percona Toolkit, Percona XtraDB Cluster and Percona Monitoring and Management.

During this webinar, Peter will talk about newly released features in Percona software, show a few quick demos and share with you highlights from the Percona open source software roadmap.

Peter will also talk about new developments in Percona commercial services, and finish with a Q&A.

You can register for the webinar here.

Peter Zaitsev, CEO of Percona

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 150 professionals in 20 plus countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014 and 2015.

Peter was an early employee at MySQL AB, eventually leading the company’s High-Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University where he majored in Computer Science. Peter is a co-author of High-Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Data Performance Blog. Fortune and DZone also tapped Peter as a contributor, and his recent ebook Practical MySQL Performance Optimization Volume 1 is one of percona.com’s most popular downloads.

Categories: MySQL

From Percona Live 2017: Thank You, Attendees!

MySQL Performance Blog - Fri, 2017-04-28 21:47

From everyone at Percona and Percona Live 2017, we’d like to send a big thank you to all our sponsors, exhibitors, and attendees at this year’s conference.

This year’s conference was an outstanding success! The event brought the open source database community together, with a technical emphasis on the core topics of MySQL, MariaDB, MongoDB, PostgreSQL, AWS, RocksDB, time series, monitoring and other open source database technologies.

We will be posting tutorial and session presentation slides at the Percona Live site, and all of them should be available shortly. 

Highlights This Year:

Thanks to Our Sponsors!

We would like to thank all of our valuable event sponsors, especially our diamond sponsors Continuent and VividCortex – your participation really makes the show happen.

We have developed multiple sponsorship options to allow participation at a level that best meets your partnering needs. Our goal is to create a significant opportunity for our partners to interact with Percona customers, other partners and community members. Sponsorship opportunities are available for Percona Live Europe 2017.

Download a prospectus here.

Percona Live Europe 2017: Dublin, Ireland!

This year’s Percona Live Europe will take place September 25th-27th, 2017, in Dublin, Ireland. Put it on your calendar now! Information on speakers, talks, sponsorship and registration will be available in the coming months.

We look forward to seeing you there!

Categories: MySQL