
MongoDB 3.2: elections just got better!

MySQL Performance Blog - Wed, 2016-05-25 19:26
Introduction

In this blog, we’ll review MongoDB 3.2 elections and how they work, as well as what is really new and different in the election protocol.

MongoDB 3.2 revamped its election protocol for increased stability, bringing smarter and faster elections. With this latest release, you will find that replication (and the election protocol) has been improved. Some of the changes include:

  • The addition of electionTimeoutMS
  • WriteConcern now implies “j:true”
    • The old j:true only required the primary node to acknowledge the journal write
    • The new j:true means all involved nodes must ACK the journal write
    • With j:true, the journal commit interval is divided by three, so synchronization occurs every 10ms (MMAP) or 33ms (WiredTiger) by default
  • Optime in rs.status() is now an Object, not a Timestamp

You’ll need to enable the new election protocol explicitly when upgrading MongoDB from an earlier version, while new replica sets get it enabled by default.
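For reference, a minimal mongo shell sketch of switching an existing replica set to the new protocol (run against the primary, assuming all members already run 3.2):

// Sketch: enable the new election protocol on an upgraded replica set
cfg = rs.conf()                 // fetch the current replica set configuration
cfg.protocolVersion = 1         // 1 = the new election protocol (0 = legacy)
rs.reconfig(cfg)                // push the new configuration to the set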

Election Protocol: what is an election?

Mongo uses a consensus protocol. This means that all nodes must agree on which node is the most current when handling:

  • Hardware failure
  • Network split
  • Time shifts

The new protocol allows for faster elections, using a term-based electionId to distinguish separate voting rounds. This guarantees there aren’t double (and conflicting) votes, while also reducing the time a node has to wait to know that a vote completed.

How does it do it?

Elections now have “term” or “vote” identifiers (IDs). Terms are used to separate voting rounds, and every vote attempt increments the ID. The incrementing ID prevents a node from double voting in the same term, and makes it easier for nodes to know if a re-vote is needed – previously, detecting this could take up to five minutes!

The protocol timeouts have some new features and behaviors:

  • Now configurable
  • Randomness added to each node
  • Less chance that all nodes time out at the same time (a configuration sketch follows this list)
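Here is a mongo shell sketch of the configurable timeout mentioned above; the 5000 ms value is only an illustration, the default is 10000 ms:

// Sketch: adjust the configurable election timeout for a replica set
cfg = rs.conf()
cfg.settings = cfg.settings || {}
cfg.settings.electionTimeoutMillis = 5000   // shorter = faster failover, but more false elections
rs.reconfig(cfg)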
Normal election process

Below I’m going to walk you through a typical replica set operation. The configuration looks like the following:

In this topology:

  • There are three members
  • All of them are heartbeating to each other
  • There is no arbiter, so you get full high availability (HA)

The following diagram provides a more detailed picture of the interactions:

Notice how replication is pulled from the primary by each secondary – the secondary does all the work. A heartbeat is still shared by all the nodes.

Now let’s see what happens when our primary crashes. It just did!

Nodes will still try to heartbeat to it until two heartbeats have failed within a short period.

After the failure, things happen quickly.

  1. Secondaries give up on heartbeats
  2. They then vote with each other on who is newest in oplog
  3. If they have more than 50% of the total voting population, they select a new winner

A new Primary is selected, and the heartbeat system is cleaned up.

Replication now gets restarted. If the failed node comes back online, it’s treated as a secondary once it “catches up” via the oplog.

Stepdown Election Process

The stepdown election process is the same as above, with the following caveats:

  • It’s MUCH faster, as the existing primary “starts” an election
  • There is less chance of the old primary not having data replicated
  • It kills writes while doing election
  • The election process doesn’t wait for heartbeat timeouts

Generally speaking, you should always try to use the stepdown election process. Timeouts are for crashes and failures, not general use.
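A minimal sketch of starting one from the mongo shell on the current primary (the numbers are illustrative):

// Sketch: ask the primary to step down and trigger an election
// first argument: seconds the old primary will refuse re-election
// second argument: seconds to wait for a secondary to catch up before stepping down
rs.stepDown(60, 10)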

 

Categories: MySQL

Percona Server 5.6.30-76.3 is now available

MySQL Performance Blog - Wed, 2016-05-25 16:54


Percona is glad to announce the release of Percona Server 5.6.30-76.3 on May 25, 2016. Download the latest version from the Percona web site or the Percona Software Repositories.

Based on MySQL 5.6.30, including all the bug fixes in it, Percona Server 5.6.30-76.3 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release can be found in the 5.6.30-76.3 milestone on Launchpad.

Bugs Fixed:

  • When Read Free Replication was enabled for TokuDB, and there was no explicit primary key for the replicated TokuDB table, there could be duplicated records in the table after an update operation. The fix disables Read Free Replication for tables without an explicit primary key, performs row lookups for UPDATE and DELETE binary log events, and issues a warning. Bug fixed #1536663 (#950).
  • Attempting to execute a non-existing prepared statement with Response Time Distribution plugin enabled could lead to a server crash. Bug fixed #1538019.
  • TokuDB was using different memory allocators; this was causing safemalloc warnings in debug builds and crashes because memory accounting didn’t add up. Bug fixed #1546538 (#962).
  • Fixed heap allocator/deallocator mismatch in Metrics for scalability measurement. Bug fixed #1581051.
  • Percona Server is now built with system zlib library instead of the older bundled one. Bug fixed #1108016.
  • Reduced the memory overhead per page in the InnoDB buffer pool. The fix was based on Facebook patch #91e979e. Bug fixed #1536693 (upstream #72466).
  • CREATE TABLE ... LIKE ... could create a system table with an unsupported enforced engine. Bug fixed #1540338.
  • Change buffer merge could throttle to 5% of I/O capacity on an idle server. Bug fixed #1547525.
  • Slave_open_temp_tables would fail to decrement on the slave with a disabled binary log if the master was killed. Bug fixed #1567361.
  • The server will now show a more descriptive error message when Percona Server fails with errno == 22 "Invalid argument", if innodb_flush_method was set to ALL_O_DIRECT. Bug fixed #1578604.
  • Killed connection threads could get their sockets closed twice on shutdown. Bug fixed #1580227.
  • AddressSanitizer build with LeakSanitizer enabled was failing at gen_lex_hash invocation. Bug fixed #1580993 (upstream #80014).
  • apt-cache show command for percona-server-client was showing innotop included as part of the package. Bug fixed #1201074.
  • mysql-systemd would fail with PAM authentication and proxies due to a regression introduced when fixing #1534825 in Percona Server 5.6.29-76.2. Bug fixed #1558312.
  • Upgrade logic for figuring if TokuDB upgrade can be performed from the version on disk to the current version was broken due to a regression introduced when fixing bug #684 in Percona Server 5.6.27-75.0. Bug fixed #717.
  • If ALTER TABLE was run while tokudb_auto_analyze variable was enabled it would trigger auto-analysis, which could lead to a server crash if ALTER TABLE DROP KEY was used because it would be operating on the old table/key meta-data. Bug fixed #945.
  • The TokuDB storage engine with tokudb_pk_insert_mode set to 1 is safe to use in all conditions. On INSERT IGNORE or REPLACE INTO, it tests whether triggers exist on the table, or whether replication is active with a binary log format other than STATEMENT, before it allows the optimization. If either of these conditions is met, it falls back to the “safe” operation of looking up the target row first. Bug fixed #952.
  • Bug in TokuDB Index Condition Pushdown was causing ORDER BY DESC to reverse the scan outside of the WHERE bounds. This would cause a query to hang in a sending data state for several minutes in some environments with large amounts of data (3 billion records) if the ORDER BY DESC statement was used. Bugs fixed #988, #233, and #534.

Other bugs fixed: #1399562 (upstream #75112), #1510564 (upstream #78981), #1496282 (#964), #1496786 (#956), #1566790, #1552673, #1567247, #1567869, #718, #914, #970, #971, #972, #976, #977, #981, #637, and #982.

Release notes for Percona Server 5.6.30-76.3 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

Categories: MySQL

Looking inside the MySQL 5.7 document store

MySQL Performance Blog - Tue, 2016-05-24 22:36

In this blog, we’ll look at the MySQL 5.7 document store feature, and how it is implemented.

Document Store

MySQL 5.7.12 is a major new release, as it contains quite a number of new features:

  1. Document store and “MongoDB” like NoSQL interface to JSON storage
  2. Protocol X / X Plugin, which can be used for asynchronous queries (I will write about it as well)
  3. New MySQL shell

Peter already wrote the document store overview; in this post, I will look deeper into the document store implementation. In my next post, I will demonstrate how to use document store for Internet of Things (IoT) and event logging.

Older MySQL 5.7 versions already have a JSON data type, and an ability to create virtual columns that can be indexed. The new document store feature is based on the JSON datatype.

So what is the document store anyway? It is an add-on to a normal MySQL table with a JSON field. Let’s take a deep dive into it and see how it works.

First of all: one can interface with the document store’s collections using the X Plugin (default port: 33060). To do that:

  1. Enable X Plugin and install MySQL shell.
  2. Login to a shell:
    mysqlsh --uri root@localhost
  3. Run commands (JavaScript mode, can be switched to SQL or Python):
    mysqlsh --uri root@localhost
    Creating an X Session to root@localhost:33060
    Enter password:
    No default schema selected.
    Welcome to MySQL Shell 1.0.3 Development Preview
    Copyright (c) 2016, Oracle and/or its affiliates. All rights reserved.
    Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
    Type 'help', 'h' or '?' for help.
    Currently in JavaScript mode. Use sql to switch to SQL mode and execute queries.
    mysql-js> db = session.getSchema('world_x')
    <Schema:world_x>
    mysql-js> db.getCollections()
    {
        "CountryInfo": <Collection:CountryInfo>
    }

Now, how is the document store’s collection different from a normal table? To find out, I’ve connected to a normal MySQL shell:

mysql world_x
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2396
Server version: 5.7.12 MySQL Community Server (GPL)
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show create table CountryInfo\G
*************************** 1. row ***************************
       Table: CountryInfo
Create Table: CREATE TABLE `CountryInfo` (
  `doc` json DEFAULT NULL,
  `_id` varchar(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$._id'))) STORED NOT NULL,
  PRIMARY KEY (`_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

mysql> show tables;
+-------------------+
| Tables_in_world_x |
+-------------------+
| City              |
| Country           |
| CountryInfo       |
| CountryLanguage   |
+-------------------+
4 rows in set (0.00 sec)

So the document store is actually an InnoDB table with a single doc JSON field, plus a primary key (_id), which is a stored generated column.

As we can also see, there are four tables in the world_x database, but db.getCollections() only shows one. So how does MySQL distinguish between a “normal” table and a “document store” table? To find out, we can enable the general query log and see which query is being executed:

$ mysql -e 'set global general_log=1'
$ tail /var/log/general.log
2016-05-17T20:53:12.772114Z 186 Query SELECT table_name, COUNT(table_name) c FROM information_schema.columns WHERE ((column_name = 'doc' and data_type = 'json') OR (column_name = '_id' and generation_expression = 'json_unquote(json_extract(`doc`,''$._id''))')) AND table_schema = 'world_x' GROUP BY table_name HAVING c = 2
2016-05-17T20:53:12.773834Z 186 Query SHOW FULL TABLES FROM `world_x`

As you can see, every table that has a specific structure (doc JSON or specific generation_expression) is considered to be a JSON store. Now, how does MySQL translate the .find or .add constructs to actual MySQL queries? Let’s run a sample query:

mysql-js> db.getCollection("CountryInfo").find('Name= "United States"').limit(1)
[
    {
        "GNP": 8510700,
        "IndepYear": 1776,
        "Name": "United States",
        "_id": "USA",
        "demographics": {
            "LifeExpectancy": 77.0999984741211,
            "Population": 278357000
        },
        "geography": {
            "Continent": "North America",
            "Region": "North America",
            "SurfaceArea": 9363520
        },
        "government": {
            "GovernmentForm": "Federal Republic",
            "HeadOfState": "George W. Bush",
            "HeadOfState_title": "President"
        }
    }
]
1 document in set (0.02 sec)

and now look at the general query log again:

2016-05-17T21:02:21.213899Z 186 Query SELECT doc FROM `world_x`.`CountryInfo` WHERE (JSON_EXTRACT(doc,'$.Name') = 'United States') LIMIT 1

We can verify that MySQL translates all document store commands to SQL. That also means that it is 100% transparent to the existing MySQL storage level and will work with other storage engines. Let’s verify that, just for fun:

mysql> alter table CountryInfo engine=MyISAM;
Query OK, 239 rows affected (0.06 sec)
Records: 239  Duplicates: 0  Warnings: 0

mysql-js> db.getCollection("CountryInfo").find('Name= "United States"').limit(1)
[
    {
        "GNP": 8510700,
        "IndepYear": 1776,
        "Name": "United States",
        "_id": "USA",
        "demographics": {
            "LifeExpectancy": 77.0999984741211,
            "Population": 278357000
        },
        "geography": {
            "Continent": "North America",
            "Region": "North America",
            "SurfaceArea": 9363520
        },
        "government": {
            "GovernmentForm": "Federal Republic",
            "HeadOfState": "George W. Bush",
            "HeadOfState_title": "President"
        }
    }
]
1 document in set (0.00 sec)

2016-05-17T21:09:21.074726Z 2399 Query alter table CountryInfo engine=MyISAM
2016-05-17T21:09:41.037575Z 2399 Quit
2016-05-17T21:09:43.014209Z 186 Query SELECT doc FROM `world_x`.`CountryInfo` WHERE (JSON_EXTRACT(doc,'$.Name') = 'United States') LIMIT 1

Worked fine!

Now, how about the performance? We can simply take the SQL query and run explain:

mysql> explain SELECT doc FROM `world_x`.`CountryInfo` WHERE (JSON_EXTRACT(doc,'$.Name') = 'United States') LIMIT 1
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: CountryInfo
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 239
     filtered: 100.00
        Extra: Using where
1 row in set, 1 warning (0.00 sec)

Hmm, it looks like it is not using an index. That’s because there is no index on Name. Can we add one? Sure, we can add a virtual column and then index it:

mysql> alter table CountryInfo add column Name varchar(255)
    -> GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$.Name'))) VIRTUAL;
Query OK, 0 rows affected (0.12 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> alter table CountryInfo add key (Name);
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> explain SELECT doc FROM `world_x`.`CountryInfo` WHERE (JSON_EXTRACT(doc,'$.Name') = 'United States') LIMIT 1
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: CountryInfo
   partitions: NULL
         type: ref
possible_keys: name
          key: name
      key_len: 768
          ref: const
         rows: 1
     filtered: 100.00
        Extra: NULL
1 row in set, 1 warning (0.00 sec)

That is really cool! We have added an index, and now the original query starts using it. Note that we do not have to reference the new field: the MySQL optimizer is smart enough to translate JSON_EXTRACT(doc,'$.Name') = 'United States' into an index scan on the virtual column.

But please note: JSON attributes are case-sensitive. If you use (doc,'$.name') instead of (doc,'$.Name'), it will not generate an error, but it will simply break the search: all queries looking for “Name” will return 0 rows.

Finally, if you looked closely at the output of db.getCollection("CountryInfo").find('Name= "United States"').limit(1) , you noticed that the database has outdated info:

"government": { "GovernmentForm": "Federal Republic", "HeadOfState": "George W. Bush", "HeadOfState_title": "President" }

Let’s change “George W. Bush” to “Barack Obama” using the .modify clause:

mysql-js> db.CountryInfo.modify("Name = 'United States'").set("government.HeadOfState", "Barack Obama" );
Query OK, 1 item affected (0.02 sec)

mysql-js> db.CountryInfo.find('Name= "United States"')
[
    {
        "GNP": 8510700,
        "IndepYear": 1776,
        "Name": "United States",
        "_id": "USA",
        "demographics": {
            "LifeExpectancy": 77.0999984741211,
            "Population": 278357000
        },
        "geography": {
            "Continent": "North America",
            "Region": "North America",
            "SurfaceArea": 9363520
        },
        "government": {
            "GovernmentForm": "Federal Republic",
            "HeadOfState": "Barack Obama",
            "HeadOfState_title": "President"
        }
    }
]
1 document in set (0.00 sec)

Conclusion

Document store is an interesting concept and a good add-on on top of the existing MySQL JSON feature. Using the new .find/.add/.modify methods instead of the original SQL statements can be convenient in some cases.

Some might ask, “why do you want to use document store and store information in JSON inside the database if it is relational anyway?” Storing data in JSON can be quite useful in some cases, for example:

  • You already have JSON data (e.g., from external feeds) and need to store it anyway. Using the JSON datatype will be more convenient and more efficient.
  • You have a flexible schema, typical for the Internet of Things for example, where some sensors might only send temperature data, and some might send temperature/humidity/light (but light information is only recorded during the day), etc. Storing it in the JSON format can be more convenient, so that you do not have to declare all possible fields in advance and do not have to run “alter table” if a new sensor starts sending new types of data. (A short sketch follows this list.)
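As an illustration of the second case, here is a rough mysqlsh (JavaScript mode) sketch – the collection and field names are made up for this example:

mysql-js> db.createCollection('sensor_events')
mysql-js> db.sensor_events.add({sensor_id: 1, ts: '2016-05-24 10:00:00', temperature: 22.4})
mysql-js> db.sensor_events.add({sensor_id: 2, ts: '2016-05-24 10:00:05', temperature: 21.9, humidity: 48, light: 310})
mysql-js> db.sensor_events.find('sensor_id = 2')

Sensors that report different sets of fields live in the same collection, and no ALTER TABLE is needed when a new field appears.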

In the next two blog posts, I will show how to use document store for Internet of Things / event streaming, and how to use X Protocol for asynchronous queries in MySQL.

Categories: MySQL

pt-online-schema-change (if misused) can’t save the day

MySQL Performance Blog - Tue, 2016-05-24 18:27

In this blog post we’ll discuss pt-online-schema-change, and how to correctly use it.

Always use pt-osc?

Altering large tables can still be a problematic DBA task, even now that Online DDL features have improved in MySQL 5.6 and 5.7. Some ALTER types are still not online, or are sometimes just too expensive to execute on a busy production master.

So in some cases, we may want to apply an ALTER first on the slaves, taking them out of the traffic pool one by one and bringing them back after the ALTER is done. In the end, we can promote one of the already-altered slaves to be the new master, so that the downtime/maintenance time is greatly minimized. The ex-master can be altered later, without affecting production. Of course, this method works best when the schema change is backwards-compatible.

So far so good, but there is another problem. Let’s say the table is huge, and the ALTER takes a lot of time on the slave. When it is a DML-blocking type of ALTER (perhaps when using MySQL 5.5.x or older, etc.), there will be long slave lag (if the table is being written to by the replication SQL thread at the same time, for example). So what do we do to speed up the process and avoid lag on the altered slave? One tempting idea: why not use pt-online-schema-change on the slave, which can do the ALTER in a non-blocking fashion?

Let’s see how that would work. I need to rebuild a big table on a slave running MySQL 5.6.16 (the “null” ALTER only became online in 5.6.17) to reclaim disk space after some rows were deleted.

This example demonstrates the process (db1 is the master, db2 is the slave):

[root@db2 ~]# pt-online-schema-change --execute --alter "engine=innodb" D=db1,t=sbtest1
No slaves found.  See --recursion-method if host db2 has slaves.
Not checking slave lag because no slaves were found and --check-slave-lag was not specified.
Operation, tries, wait:
  analyze_table, 10, 1
  copy_rows, 10, 0.25
  create_triggers, 10, 1
  drop_triggers, 10, 1
  swap_tables, 10, 1
  update_foreign_keys, 10, 1
Altering `db1`.`sbtest1`...
Creating new table...
Created new table db1._sbtest1_new OK.
Altering new table...
Altered `db1`.`_sbtest1_new` OK.
2016-05-16T10:50:50 Creating triggers...
2016-05-16T10:50:50 Created triggers OK.
2016-05-16T10:50:50 Copying approximately 591840 rows...
Copying `db1`.`sbtest1`:  51% 00:28 remain
(...)

The tool is still working during the operation, and the table receives some writes on master:

db1 {root} (db1) > update db1.sbtest1 set k=k+2 where id<100;
Query OK, 99 rows affected (0.06 sec)
Rows matched: 99  Changed: 99  Warnings: 0

db1 {root} (db1) > update db1.sbtest1 set k=k+2 where id<100;
Query OK, 99 rows affected (0.05 sec)
Rows matched: 99  Changed: 99  Warnings: 0

which are applied on slave right away, as the table allows writes all the time.

(...)
Copying `db1`.`sbtest1`:  97% 00:01 remain
2016-05-16T10:51:53 Copied rows OK.
2016-05-16T10:51:53 Analyzing new table...
2016-05-16T10:51:53 Swapping tables...
2016-05-16T10:51:53 Swapped original and new tables OK.
2016-05-16T10:51:53 Dropping old table...
2016-05-16T10:51:53 Dropped old table `db1`.`_sbtest1_old` OK.
2016-05-16T10:51:53 Dropping triggers...
2016-05-16T10:51:53 Dropped triggers OK.
Successfully altered `db1`.`sbtest1`.

Done! No slave lag, and the table is rebuilt. But . . . let’s just make sure data is consistent between the master and slave (you can use pt-table-checksum):

db1 {root} (db1) > select max(k) from db1.sbtest1 where id<100;
+--------+
| max(k) |
+--------+
| 392590 |
+--------+
1 row in set (0.00 sec)

db2 {root} (test) > select max(k) from db1.sbtest1 where id<100;
+--------+
| max(k) |
+--------+
| 392586 |
+--------+
1 row in set (0.00 sec)

No, it is not! The slave is clearly missing the updates that happened during a pt-osc run. Why?
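For a systematic comparison, a pt-table-checksum run from the master would flag the same drift (a sketch only – host and user names here are assumptions):

# run on the master; checksums are computed there and replicated to the slave for comparison
pt-table-checksum h=db1,u=checksum_user --ask-pass --databases=db1 --tables=sbtest1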

The explanation is simple. pt-online-schema-change relies on triggers. The triggers are used to make the writes that happen to the original table also get applied to the new table copy, so that both tables are consistent when the final table switch happens at the end of the process. So what is the problem here? It’s the binary log format: in ROW-based replication, triggers are not fired on the slave! And my master is running in ROW mode:

db1 {root} (db1) > show variables like 'binlog_format';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | ROW   |
+---------------+-------+
1 row in set (0.01 sec)

So, if I had used pt-online-schema-change on the master, the data inconsistency problem would not have happened. But using it on the slave is just dangerous!

Conclusion

Whenever you use pt-online-schema-change, make sure you are not executing it on a slave instance. For that reason, I escalated this bug report: https://bugs.launchpad.net/percona-toolkit/+bug/1221372. Also, in many cases a normal ALTER will work well enough. As in my example, to rebuild the table separately on each slave in lockless mode, I would just need to upgrade to a more recent 5.6 version.

BTW, if you’re wondering about Galera replication (used in Percona XtraDB Cluster, etc.): even though it also uses a ROW-based format, it’s not a problem there. The pt-osc triggers are created on all nodes thanks to its synchronous, write-anywhere replication nature. It does not matter which node you start pt-online-schema-change on, or which other nodes your applications write to at the same time. No slaves, no problem!

Categories: MySQL

Webinar Thursday May 26: Troubleshooting MySQL hardware resource usage

MySQL Performance Blog - Tue, 2016-05-24 15:46

Join Sveta on Thursday, May 26, 2016, at 10 am PDT (UTC-7) for her webinar Troubleshooting MySQL hardware resource usage.

MySQL does not just run on its own. It stores data on disk, and stores data and temporary results in memory. It uses CPU resources to perform operations, and a network to communicate with its clients.

In this webinar, we’ll discuss common resource usage issues, how they affect MySQL Server performance, and methods to find out how resources are being used. We will employ both OS-level tools, and new features in Performance Schema that provide detailed information on what exactly is happening inside MySQL Server.

Register for the webinar here.

Sveta Smirnova, Principal Technical Services Engineer

Sveta joined Percona in 2015.

Her main professional interests are problem-solving, working with tricky issues, bugs, finding patterns that can solve typical issues quicker, teaching others how to deal with MySQL issues, bugs and gotchas effectively. Before joining Percona Sveta worked as Support Engineer in MySQL Bugs Analysis Support Group in MySQL AB-Sun-Oracle.

She is the author of the book MySQL Troubleshooting and JSON UDF Functions for MySQL.

Categories: MySQL

Take Percona’s one-click high availability poll

MySQL Performance Blog - Mon, 2016-05-23 20:47

Wondering what high availability (HA) solutions are most popular? Take our high availability poll below!

HA is always a hot topic. The reality is that if your data is not available, your customers cannot do business with you. In fact, estimates show the average cost of downtime is about $5K per minute. With an average outage taking 40 minutes to correct, you could be looking at a potential cost of $200K if your MySQL instance goes down. Whether your database is on premise, or in public or private clouds, it is critical that your database deployment does not have a potentially devastating single point of failure.

Please take a few seconds and answer the following poll. It will help the community get an idea of how companies are approaching HA in their critical database environments.

If you’re using other solutions or have specific issues, feel free to comment below. We’ll post a follow-up blog with the results!

Note: There is a poll embedded within this post; please visit the site to participate in it.
Categories: MySQL

Percona disabling TLSv1.0 May 31st 2016

MySQL Performance Blog - Mon, 2016-05-23 18:09

As of May 31st, 2016, we will be disabling TLSv1.0 support on www.percona.com, repo.percona.com, etc.

This is ahead of the PCI changes that mandate deprecation of the TLSv1.0 protocol by June 30th, 2016. (PDF)

What does this mean for you the user?

Based on analysis of our IDS logs, this will affect around 6.32% of requests. As of May 31st, such requests will present an error when trying to negotiate a TLS connection.

Users are advised to update their clients accordingly. SSLabs provides a good test for browsers, though this does not support command line tools. Going forward, we will only support TLSv1.1 and TLSv1.2.

These changes come a little over a year from our previous SSL overhaul, and are part of our ongoing effort to ensure the security of our users.

Thank you for your time. Please leave any questions in the comments section, or email us at security(at)percona.com.

 

 

Categories: MySQL

Percona XtraBackup 2.4.3 is now available

MySQL Performance Blog - Mon, 2016-05-23 13:56


Percona is glad to announce the GA release of Percona XtraBackup 2.4.3 on May 23rd, 2016. Downloads are available from our download site and from apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups

New Features:

  • Percona XtraBackup has implemented a new --reencrypt-for-server-id option. Using this option allows users to start the server instance with a different server_id from the one the encrypted backup was taken from, such as a replication slave or a Galera node. When this option is used, xtrabackup will, as a prepare step, generate a new master key with an ID based on the new server_id, store it in the keyring file, and re-encrypt the tablespace keys inside the tablespace headers.
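A sketch of how this might look during the prepare phase (paths and the target server_id below are assumptions, not part of the release notes):

# prepare an encrypted backup so it can be started with server_id=2
xtrabackup --prepare --target-dir=/data/backups/full \
  --keyring-file-data=/var/lib/mysql-keyring/keyring \
  --reencrypt-for-server-id=2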

Bugs Fixed:

  • Running DDL statements on Percona Server 5.7 during the backup process could in some cases lead to failure while preparing the backup. Bug fixed #1555626.
  • MySQL 5.7 can sometimes skip redo logging when creating an index. If such ALTER TABLE is being issued during the backup, the backup would be inconsistent. xtrabackup will now abort with an error message if such ALTER TABLE has been done during the backup. Bug fixed #1582345.
  • .ibd files for remote tablespaces were not copied back to the original location pointed by the .isl files. Bug fixed #1555423.
  • When called with insufficient parameters, like specifying the empty --defaults-file option, Percona XtraBackup could crash. Bug fixed #1566228.
  • The documentation states that the default value for --ftwrl-wait-query-type is all, however the actual default was update. The default value has been changed to match the documentation. Bug fixed #1566315.
  • When the --keyring-file-data option was specified, but no keyring file was found, xtrabackup would create an empty one instead of reporting an error. Bug fixed #1578607.
  • If ALTER INSTANCE ROTATE INNODB MASTER KEY was run at the same time when xtrabackup --backup was bootstrapping it could catch a moment when the key was not written into the keyring file yet and xtrabackup would overwrite the keyring with the old copy of a keyring, so the new key would be lost. Bug fixed #1582601.
  • The output of the --slave-info option was missing an apostrophe. Bug fixed #1573371.

Release notes with all the bugfixes for Percona XtraBackup 2.4.3 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

Categories: MySQL

Introduction to Troubleshooting Performance – Troubleshooting Slow Queries webinar: Q & A

MySQL Performance Blog - Fri, 2016-05-20 20:50

In this blog, I will provide answers to the Q & A for the Troubleshooting Slow Queries webinar.

First, I want to thank you for attending the April 28 webinar. The recording and slides for the webinar are available here. Below is the list of your questions that I wasn’t able to answer during the webinar, with responses:

Q: I’ve heard that is a bad idea to use select *; what do you recommend?

A: When I used SELECT * in my slides, I wanted to underline the idea that sometimes you need to select all columns from the table. There is nothing bad about it if you need them. SELECT * is bad when you need only a few columns from the table. In this case, you retrieve more data than needed, which affects performance. Another issue that SELECT * can cause: if you hard-code the statement into your application and then change the table definition, the application could start retrieving columns in the wrong order and output them incorrectly (e.g., email instead of billing address). Or even worse, it will try to access a non-existent index in the result set array. The best practice is to explicitly enumerate all columns that your application needs.

Q: I heard that using index_field length will affect the indexing principle during query execution (e.g., one field is varchar and its length is not fixed, some values have short text, some values have long text. at this time). If we use this field as indexing, what happens?

A: I assume you are asking about the ability to create an index with lengths smaller than the column length? They work as follows:

Assume you have a TEXT  field which contains these user questions:

  1. I’ve heard that is a bad idea to use select * what do you recommend?
  2. I heard that using index_field length will affect the indexing principle during query execution (e.g., one field is varchar and its length is not fixed, some values have short text, some values have long text. at this time). If we use this field as indexing, what happens?
  3. ….

Since this is a TEXT field, you cannot create an index on it without specifying its length, so you need to make the index as minimal as possible while still uniquely identifying questions. If you create an index with length 10, it will contain:

  1. I’ve heard
  2. I heard th

With length 10 you index only those parts of the questions that are not very distinct from each other, and that do not contain useful information about what each question is asking. You could instead create an index of length 255:

  1. I’ve heard that is a bad idea to use select * what do you recommend?
  2. I heard that using index_field length will affect the indexing principle during query execution (e.g., one field is varchar and its length is not fixed, some values have short text, some values have long text. at this time). If we use this field as index

In this case, the index includes the whole first question and almost all of the second question. This makes the index too large and requires more disk space (which causes more IO). Also, indexing that much of the second question is probably more than we need.

If we make the index of length 75, we will have:

  1. I’ve heard that is a bad idea to use select * what do you recommend?
  2. I heard that using index_field length will affect the indexing principle du

This is more than enough for the first question and gives a good idea of what is in the second question. It will also potentially have enough unique entries to make its cardinality look more like the cardinality of the real data distribution.

To conclude: choosing the correct index length is something that requires practice and analysis of your actual data. Try to make prefixes as short as possible, but long enough that the number of unique entries in the index is similar to the number of unique entries in the table.
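As a sketch with a hypothetical questions table, you could create the prefix index and then compare its cardinality against the real number of distinct values:

ALTER TABLE questions ADD INDEX idx_question_prefix (question_text(75));
SHOW INDEX FROM questions;   -- check the Cardinality column for idx_question_prefix
SELECT COUNT(DISTINCT LEFT(question_text, 75)) AS prefix_distinct,
       COUNT(DISTINCT question_text)           AS full_distinct
FROM questions;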

Q: Which view can we query to see stats?

A: Do you mean index statistics? SHOW INDEX FROM table_name will do it.

Q: We have an InnoDB table with 47 fields (mostly text); some are ft-indexed. I tried to do an alter table, and it ran for 24 hours. What is the best way to run an alter table to add one extra field? The table has 1.9 M rows and 47 columns with many indexes.

A: Adding a column requires a table copy. Therefore, the speed of this operation depends on the table size and the speed of your disk. If you are using version 5.6 or later, adding a column does not block parallel queries (and is therefore not a big deal). If you are using an older version, you can always use the pt-online-schema-change utility from Percona Toolkit. However, it will run even more slowly than the regular ALTER TABLE. Unfortunately, you cannot speed up the execution of ALTER TABLE much. The only thing that you can do is use a faster disk (with options tuned to exploit the speed of the disk).

However, if you do not want this increased IO to affect the production server, you can alter the table on a separate instance, then copy the tablespace to production, and then apply all changes to the original table from the binary logs. The steps will be something like:

  1. Ensure you use the innodb_file_per_table option and that the big table has an individual tablespace
  2. Ensure that binary log is enabled
  3. Start a new server (you can also use an existent stand-by slave).
  4. Disable writes to the table
  5. Record the binary log position
  6. Copy the tablespace to the new server as described here.
  7. Enable writes on the production server
  8. Run ALTER TABLE on the new server you created in step 3 (it will still take 24 hours)
  9. Stop writes to the table on the production server
  10. Copy the tablespace, altered in step 8
  11. Apply all writes to this table that are in the binary logs after the position recorded in step 5.
  12. Enable writes to the table

This scenario will take even more time overall, but will have minimal impact on the production server.
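For steps 6 and 10, the transportable tablespace copy itself looks roughly like this (a sketch; table and path names are examples, and it requires innodb_file_per_table=ON on both servers):

-- on the source server: quiesce the table and write the .cfg metadata file
FLUSH TABLES big_table FOR EXPORT;
-- copy big_table.ibd and big_table.cfg to the destination, then release the lock
UNLOCK TABLES;

-- on the destination server (the table must already exist with an identical definition)
ALTER TABLE big_table DISCARD TABLESPACE;
-- put the copied .ibd/.cfg files in place in the destination datadir, then
ALTER TABLE big_table IMPORT TABLESPACE;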

Q: If there is a compound index like index1(emp_id,date), will the following query be able to use index? “select * from table1 where emp_id = 10”

A: Yes. At least it should.

Q: Are filesort and temporary in extended info for explain not good?

A: Regarding filesort: it depends. For example, you will always see “Using filesort” for tables which perform ORDER BY and cannot use an index for the ORDER BY. This is not always bad. For example, in this query:

mysql> explain select emp_no, first_name from employees where emp_no <20000 order by first_name\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: employees
   partitions: NULL
         type: range
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 4
          ref: NULL
         rows: 18722
     filtered: 100.00
        Extra: Using where; Using filesort
1 row in set, 1 warning (0,01 sec)

the primary key is used to resolve the rows, and the filesort is necessary and not avoidable. You can read about the different filesort algorithms here.

Regarding Using temporary: this means that a temporary table will be created during query execution. This can be bad, especially if the temporary table is large and cannot fit into memory. In that case, it will be written to disk and slow down operations. But, again, sometimes creating temporary tables is not avoidable: for example, if you have both GROUP BY and ORDER BY clauses which list columns differently, as stated in the user manual.

Q: Is key_len length more of a good thing for query execution?

A: The key_len field is not NULL for all queries that use an index, and simply shows the length of the key part used. It is neither good nor bad; it is just informational. You can use this information, for example, to identify which part of a combined index is used to resolve the query.

Q: Does an alter query go for an optimizer check?

A: No. You can check this either by enabling the optimizer trace, running the ALTER, and finding that the trace is empty; or by enabling the debug option and searching the resulting trace for optimize.

Q: A query involves four columns that are all individually covered by an index. The optimizer didn’t merge indexes because of cost, and even didn’t choose the composite index I created.

A: This depends on the table definition and query you used. I cannot provide a more detailed answer based only on this information.

Q cont.: Finally, only certain composite indexes were suitable, the column order in the complex index mattered a lot. Why couldn’t the optimizer merge the four individual single column indexes, and why did the order of the columns in the composite index matter?

A: Regarding why the optimizer could not merge the four indexes, I need to see how the table is defined and which data is in the indexed columns. Regarding why the order of the columns in the composite index matters, it depends on the query. While the optimizer can use an index on (col1, col2) whether the condition is written as col1=X AND col2=Y or as col2=Y AND col1=X, when you use OR the order is important. For example, for the condition col1=X OR col2=Y, the part col1=X is always evaluated, and the part col2=Y is evaluated only when col1=X is false. The same logic applies to queries like SELECT col1 WHERE col2=Y ORDER BY col3. See the user manual for details.

Q: When I try to obtain the optimizer trace on the console, the result is cut off. Even if I redirect the output to a file, how to overcome that?

A: Which version of MySQL Server do you use? The TRACE column is defined as longtext NOT NULL, and should not cause such issues. If it does with a newer version, report a bug at http://bugs.mysql.com/.

Q: Are there any free graphical visualizers for either EXPLAIN or the optimizer TRACE output?

A: There is a graphical visualizer for EXPLAIN in MySQL Workbench. But it works with online data only: you cannot run it on EXPLAIN output saved to a file. I don’t know of any visualizer for the optimizer TRACE output. However, since it is JSON, you can simply save it to a file and open it in a web browser. That gives a better view than opening it in a simple text editor.

Q: When do you use force index instead of use index hints?

A: According to the user manual, the “USE INDEX (index_list) hint tells MySQL to use only one of the named indexes to find rows in the table” and the “FORCE INDEX hint acts like USE INDEX (index_list), with the addition that a table scan is assumed to be very expensive . . . a table scan is used only if there is no way to use one of the named indexes to find rows in the table.” This means that with USE INDEX you are giving the optimizer a hint to prefer a particular index over others, but not enforcing index usage if the optimizer prefers a table scan, while FORCE INDEX requires using the index. I myself use only the FORCE and IGNORE index hints.
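Just to show where the hints go (table and index names below are hypothetical):

SELECT id, status FROM orders FORCE INDEX (idx_created_at)
WHERE created_at > '2016-05-01';

SELECT id, status FROM orders IGNORE INDEX (idx_status)
WHERE status = 'done' AND created_at > '2016-05-01';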

Q: Very informative session. I missed the beginning part. Are you going to distribute the recoded session later?

A: Yes. As usual, the slides and recording are available here.

Categories: MySQL

Percona XtraDB Cluster 5.6.29-25.15 is now available

MySQL Performance Blog - Fri, 2016-05-20 14:06


Percona is glad to announce the new release of Percona XtraDB Cluster 5.6 on May 20, 2016. Binaries are available from the downloads area or our software repositories.

Percona XtraDB Cluster 5.6.29-25.15 is now the current release, based on the following:

All of Percona software is open-source and free, and all the details of the release can be found in the 5.6.29-25.15 milestone at Launchpad.

For more information about relevant Codership releases, see this announcement.

Bugs Fixed:

  • Node eviction in the middle of SST now causes the node to shut down properly.
  • After an error during node startup, the state is now marked unsafe only if SST is required.
  • Fixed data inconsistency during a multi-insert auto-increment workload on an async master with binlog-format=STATEMENT when a node acts as an async slave with wsrep_auto_increment_control=ON.
  • Fixed crash when a prepare statement is aborted (due to a conflict with applier) and then replayed.
  • Removed a special case condition in wsrep_recover() that would not happen under normal conditions.
  • Percona XtraDB Cluster no longer fails during SST, if a node reserves a very large amount of memory for InnoDB buffer pool.
  • If the value of wsrep_cluster_address is not valid, trying to create a slave thread will now generate a warning instead of an error, and the thread will not be created.
  • Fixed error with loading data infile (LDI) into a multi-partitioned table.
  • The wsrep_node_name variable now defaults to host name.
  • Starting mysqld with unknown option now fails with a clear error message, instead of randomly crashing.
  • Optimized the operation of SST and IST when a node fails during startup.
  • The wsrep_desync variable can now be enabled only after a node is synced with the cluster. That is, it cannot be set during node bootup configuration.
  • Fixed crash when setting a high flow control limit (fc_limit) and the recv queue fills up.
  • Only the default 16 KB page size (innodb_page_size=16384) is accepted until the relevant upstream bug is fixed by Codership (see https://github.com/codership/galera/issues/398). All other sizes will report Invalid page size and shut down (the server will not start up).
  • If a node is executing RSU/FTWRL, explicit desync of the node will not happen until the implicit desync action is complete.
  • Fixed multiple bugs in the test suite to improve quality assurance.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Categories: MySQL

Fixing MySQL scalability problems with ProxySQL or thread pool

MySQL Performance Blog - Thu, 2016-05-19 20:58

In this blog post, we’ll discuss fixing MySQL scalability problems using either ProxySQL or thread pool.

In the previous post I showed that even MySQL 5.7 in read-write workloads is not able to maintain throughput. Oracle’s recommendation to play black magic with innodb_thread_concurrency and innodb_spin_wait_delay doesn’t always help. We need a different solution to deal with this scaling problem.

All the conditions are the same as in my previous run, but I will use:

  • ProxySQL limited to 200 connections to MySQL. ProxySQL has a capability to multiplex incoming connections; with this setting, even with 1000 connections to the proxy it will maintain only 200 connections to MySQL.
  • Percona Server with the thread pool enabled and a thread pool size of 64 (a sample configuration fragment follows this list)
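For the thread pool case, the relevant my.cnf fragment looks roughly like this (the value 64 matches this benchmark; tune it to your hardware):

[mysqld]
# Percona Server thread pool
thread_handling = pool-of-threads
thread_pool_size = 64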

You can see final results here:

There are good and bad sides for both solutions. With ProxySQL, there is a visible overhead on lower numbers of threads, but it keeps very stable throughput after 200 threads.

With Percona Server thread pool, there is little-to-no overhead if the number of connections is less than thread pool size, but after 200 threads it falls behind ProxySQL.

Here is a chart with the response times:

I would say the correct solution depends on your setup:

  • If you already use or plan to use ProxySQL, you may use it to prevent MySQL from saturation
  • If you use Percona Server, you might consider trying to adjust the thread pool

Summary https://github.com/Percona-Lab-results/201605-OLTP-RW-proxy-threadpool/blob/master/summary-OLTP-RW-proxy.md.

 

Categories: MySQL

Webinar Tuesday, May 24: Understanding how your MongoDB schema affects scaling, and when to consider sharding for help

MySQL Performance Blog - Thu, 2016-05-19 20:02

Please join David Murphy on Tuesday, May 24 at 10 am PDT (UTC-7) as he presents “Understanding how your MongoDB schema affects scaling, and when to consider sharding for help.”

David will discuss the pros and cons of a few MongoDB schema design patterns on a stand-alone machine, and then will look at how sharding affects them. He’ll examine what assumptions you may have made that could wreak havoc on your CPU, memory and network during a scatter-gather. This webinar will help answer the questions:

  • Would you still use the same schema if you knew you were going to shard?
  • Are your fetches using the same shard, or employing parallelism to boost performance?
  • Are you following the golden rules of schema design?

Register for this webinar here.

David Murphy, MongoDB Practice Manager

David joined Percona in October 2015 as Practice Manager for MongoDB. Before that, David joined the ObjectRocket by Rackspace team as the Lead DBA in Sept 2013. With the growth involved with any recently acquired startup, David’s role covered a wide range of evangelism, research, run book development, knowledge base design, consulting, technical account management, mentoring and much more. Before the world of MongoDB, David was a MySQL and NoSQL architect at Electronic Arts working with some of the largest titles in the world like FIFA, SimCity, and Battle Field providing tuning, design, and technology choice responsibilities. David maintains an active interest in database speaking and exploring new technologies.

Categories: MySQL

Percona Server for MongoDB 3.0.11-1.6 is now available

MySQL Performance Blog - Thu, 2016-05-19 16:10

Percona is pleased to announce the release of Percona Server for MongoDB 3.0.11-1.6 on May 19, 2016. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB 3.0.11-1.6 is an enhanced, open source, fully compatible, highly scalable, zero-maintenance downtime database supporting the MongoDB v3.0 protocol and drivers. Based on MongoDB 3.0.11, it extends MongoDB with MongoRocks and PerconaFT storage engines, as well as features like external authentication and audit logging. Percona Server for MongoDB requires no changes to MongoDB applications or code.

NOTE: The MongoRocks storage engine is still under development. There is currently no officially released version of MongoRocks that can be recommended for production.

This release includes all changes from MongoDB 3.0.11. Additionally, the following fixes were made:

  • Fixed memory over-allocation
  • PSMDB-56: Additional fixes related to this previously fixed bug.

The release notes are available in the official documentation.

 

Categories: MySQL

Percona Server 5.5.49-37.9 is now available

MySQL Performance Blog - Thu, 2016-05-19 15:39


Percona is glad to announce the release of Percona Server 5.5.49-37.9 on May 19, 2016. Based on MySQL 5.5.49, including all the bug fixes in it, Percona Server 5.5.49-37.9 is now the current stable release in the 5.5 series.

Percona Server is open-source and free. Details of the release can be found in the 5.5.49-37.9 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed:

  • Percona Server is now built with system zlib library instead of the older bundled one. Bug fixed #1108016.
  • CREATE TABLE ... LIKE ... could create a system table with an unsupported enforced engine. Bug fixed #1540338.
  • The server will now show a more descriptive error message when Percona Server fails with errno == 22 "Invalid argument", if innodb_flush_method was set to ALL_O_DIRECT. Bug fixed #1578604.
  • apt-cache show command for percona-server-client was showing innotop included as part of the package. Bug fixed #1201074.
  • mysql-systemd would fail with PAM authentication and proxies due to a regression introduced when fixing bug #1534825 in Percona Server 5.5.48-37.8. Bug fixed #1558312.

Other bugs fixed: #1578625 (upstream #81295), bug fixed #1553166, and bug fixed #1578303 (upstream #81324).

The release notes for Percona Server 5.5.49-37.9 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

Categories: MySQL

Where is the MySQL 5.7 root password?

MySQL Performance Blog - Wed, 2016-05-18 17:26

In this blog, we’ll discuss how to find the MySQL 5.7 root password.

While new MySQL software security features are always welcome, they can impact use and performance. Now by default, MySQL 5.7 creates a password for the root user (among other changes) so the installation itself can be considered secure. It’s a necessary change, but it has confused some customers and users. I see a lot of people on social networks (like Twitter) asking about this change.

Where is my root password?

The answer depends on the way you have installed MySQL 5.7 or Percona Server 5.7. I am going to show where to find the password depending on the installation method and the distribution used. For all these examples, I assume this is a new installation and you are using the default my.cnf.

Centos/Redhat – RPM Packages.

The password is not shown on screen during the installation. It is in the error log. The autogenerated my.cnf contains this line:

log-error=/var/log/mysqld.log

So, there is our password:

# cat /var/log/mysqld.log | grep "temporary password"
2016-05-16T07:09:49.796912Z 1 [Note] A temporary password is generated for root@localhost: 8)13ftQG5OYl
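You then log in with that temporary password and must set a new one before running anything else (the password below is just an example):

# mysql -u root -p
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewP4ss!';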

Debian/Ubuntu

During the packages installation, you get a prompt asking for the root password. If you don’t set it up, MySQL’s root user is created without a password. We can read the following line in package installation output:

2016-05-16T07:27:21.532619Z 1 [Warning] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.

but it is configured with the auth_socket plugin. You will only be able to connect using the UNIX socket, therefore any attempt to connect using your local IP or the network fails. Later on, you can change the password to allow connections from the network (as explained in this blog post).

All distributions – Binary tarball

mysql_install_db has been deprecated since MySQL 5.7.6. You need to use mysqld to initialize all system databases (like mysql, it contains the users and password). You have two ways of doing it:

--initialize: this is the default and recommended option. It will create the mysql system database, including a random root password that will be written to the error log.

# tail -n1 /var/log/mysql/error.log
2016-05-16T07:47:58.199154Z 1 [Note] A temporary password is generated for root@localhost: wzgds/:Kf2,g

If you don’t have the log-error directive configured, or any my.cnf at all, then the error log will be in the datadir under the name host_name.err.

--initialize-insecure: the datadir will be initialized without setting a random password for the root user.

# tail -n1 /var/log/mysql/error.log
2016-05-16T07:51:28.506142Z 1 [Warning] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.

Conclusion

Unfortunately, more security can also add more confusion. Depending on the installation method and distribution, the MySQL 5.7 root password process varies a lot, so keep an eye on the error log after every installation and also watch the installation process output shown on screen. In case you are really lost (or you have removed the error log for some reason), you can still start mysqld with --skip-grant-tables to access the database and change the password.

Categories: MySQL

Webinar Thursday May 19, 2016: MongoDB administration for MySQL DBA

MySQL Performance Blog - Wed, 2016-05-18 16:21

Please join Alexander Rubin, Percona Principal Consultant, for his webinar MongoDB administration for MySQL DBA on Thursday, May 19 at 10 am PDT (UTC-7).

If you are a MySQL DBA and want to learn MongoDB quickly – this webinar is for you. MySQL and MongoDB share similar concepts so it will not be hard to get up to speed with MongoDB.

In this talk I will explain the following MongoDB administration concepts:
  • Day to day operations for MongoDB
  • Storage engines and differences with MySQL storage engines
  • Databases, collections and documents
  • Replication in MongoDB and the difference with MySQL replication
  • Sharding in MongoDB
  • Backups in MongoDB

In the webinar, each slide will show a MySQL concept or operation (on the left) and the corresponding MongoDB one (on the right).

Register here.

Alexander Rubin, Principal Consultant

Alexander joined Percona in 2013. Alexander has worked with MySQL since 2000 as a DBA and Application Developer. Before joining Percona, he was a MySQL principal consultant for over seven years (started with MySQL AB in 2006, then Sun Microsystems and then Oracle). He helped many customers design large, scalable and highly available MySQL systems and optimize MySQL performance. Alexander also helped customers design Big Data stores with Apache Hadoop and related technologies.

Categories: MySQL

MySQL 5.7 read-write benchmarks

MySQL Performance Blog - Tue, 2016-05-17 17:22

In this post, we’ll look at the results from some MySQL 5.7 read-write benchmarks.

In my past blogs I’ve posted benchmarks on MySQL 5.5 / 5.6 / 5.7 in OLTP read-only workloads. For example:

Now, it is time to test some read-write transactional workloads. I will again use sysbench, and my scripts and configs are available here: https://github.com/Percona-Lab-results/201605-OLTP-RW.

A short description of the setup:

  • The client (sysbench) and server are on different servers, connected via 10Gb network
  • CPU: 56 logical CPU threads, Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
  • sysbench: 10 tables x 10 mln rows each, Pareto distribution
  • OS: Ubuntu 15.10 (Wily Werewolf)
  • Kernel 4.2.0-30-generic
  • The storage device is Samsung SM863 SATA SSD, single device, with ext4 filesystem
  • MySQL versions: 5.7.12 , 5.6.30 and 5.5.48

InnoDB holds all data in memory, and InnoDB log files are big enough, so there are only IO writes (which happen in the background) and there is no pressure from InnoDB on the IO subsystem.

The results looked like the following:

The vertical line shows the variability of the throughput (standard deviation).

To show the difference for a lower numbers of threads, here is chart with relative performance normalized by MySQL 5.7 (MySQL 5.7 = 1 in the following chart):

So we can finally see significant improvements in MySQL 5.7: it scales much better in read-write workloads than previous versions.

At lower thread counts, however, MySQL 5.7 throughput is still behind MySQL 5.5 and MySQL 5.6; this is where the slower single-thread performance and longer execution paths of MySQL 5.7 show themselves. The place where low-thread read-write performance really matters is replication. I wonder how a 5.7 slave performs compared to a 5.6 slave – I am going to run this benchmark soon.

Another point to keep in mind is that we still see a “bell shape,” even for MySQL 5.7. After 430 threads, the throughput drops off a cliff. Despite Oracle’s claims that there is no need for a thread pool anymore, this is not the case – I am not able to prevent the throughput drop using magic tuning with innodb_thread_concurrency and innodb_spin_wait_delay. No matter what, MySQL 5.7 is not able to maintain throughput with a high number of threads (1000+) for this workload.

What can be done in this case? I have two solutions: Percona Server with thread pool functionality, or ProxySQL with connection multiplexing. I will show these results in the next post.

Categories: MySQL

MySQL “Got an error reading communication packet” errors

MySQL Performance Blog - Mon, 2016-05-16 16:32

In this blog post, we’ll discuss the possible reasons for MySQL “Got an error reading communication packet” errors, and how to address them.

In Percona’s managed services, we often receive customer questions about communication failure errors – customers are faced with intermittent “Got an error reading communication packets” messages. I thought this topic deserved a blog post, so we can discuss the possible reasons for this error and how to remedy it. I hope this helps readers investigate and resolve these errors.

First of all, whenever a communication error occurs, it increments the status counter for either Aborted_clients or Aborted_connects. These describe, respectively, the number of connections that were aborted because the client died without closing the connection properly, and the number of failed attempts to connect to the MySQL server. The possible reasons for both errors are numerous (see the Aborted_clients increments or Aborted_connects increments sections in the MySQL manual).
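A quick way to check both counters (a minimal sketch, using the mysql command-line client):

mysql -e "SHOW GLOBAL STATUS LIKE 'Aborted%'"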

When log_warnings is set to a value greater than 1, MySQL also writes this information to the error log (shown below):

[Warning] Aborted connection 305628 to db: 'db' user: 'dbuser' host: 'hostname' (Got an error reading communication packets)
[Warning] Aborted connection 305627 to db: 'db' user: 'dbuser' host: 'hostname' (Got an error reading communication packets)
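To get these warnings, log_warnings must be above 1. A minimal way to set it at runtime (it is a dynamic variable in MySQL 5.5–5.7; add it to my.cnf as well to keep it across restarts):

mysql -e "SET GLOBAL log_warnings = 2"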

In this case, MySQL increments the status counter for Aborted_clients, which could mean:

  • The client connected successfully but terminated improperly (for example, without closing the connection cleanly)
  • The client slept for longer than the wait_timeout or interactive_timeout setting, after which the MySQL server forcibly closed the connection
  • The client terminated abnormally or exceeded max_allowed_packet for queries

The above is not an all-inclusive list.

How do we identify what caused this problem, and how do we fix it?

To be honest, aborted connection errors are not easy to diagnose, but in my experience they are related to network or firewall issues most of the time. We usually investigate these issues with the help of Percona Toolkit scripts such as pt-summary, pt-mysql-summary and pt-stalk. The output from those scripts can be very helpful.
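A minimal sketch of collecting that data with Percona Toolkit (options and file paths here are illustrative; pass your own connection options as needed):

pt-summary        > /tmp/host-summary.txt
pt-mysql-summary  > /tmp/mysql-summary.txt
pt-stalk --no-stalk --iterations=1   # collect one full diagnostic sample immediately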

Some of the reasons can be:

  • A high rate of connections sleeping inside MySQL for hundreds of seconds is a symptom that the application isn’t closing connections after doing its work, and is instead relying on wait_timeout to close them. I strongly recommend changing the application logic to properly close connections at the end of an operation.
  • Check that the value of max_allowed_packet is high enough, and that your clients are not receiving a “packet too large” message. This situation aborts the connection without closing it properly.
  • Another possibility is TIME_WAIT. If you see many connections in TIME_WAIT in the netstat output, confirm that connections are being closed properly on the application side.
  • Make sure transactions are committed (begin and commit) properly, so that once the application is “done” with the connection it is left in a clean state.
  • Ensure that client applications do not abort connections themselves. For example, if PHP has the max_execution_time option set to 5 seconds, increasing connect_timeout will not help because PHP will kill the script first. Other programming languages and environments have similar safety options.
  • Another cause of connection delays is DNS problems. Check whether you have skip-name-resolve enabled, and whether hosts are authenticated against their IP address instead of their hostname.
  • One way to find out where your application is misbehaving is to add logging to your code that records the application’s actions along with the MySQL connection ID. You can then correlate it with the connection number from the error lines. Enable the Audit Log plugin, which logs connections and query activity, and check the Percona Audit Log Plugin output as soon as you hit a connection abort error; the audit log should identify which query is the culprit. If you can’t use the Audit plugin for some reason, consider the MySQL general log – however, this can be risky on a loaded server, so enable it for only a few minutes at a time. The errors tend to happen fairly often, so you should be able to collect the needed data before the log grows too large. I recommend watching the general log with tail -f and disabling it as soon as you see the next warning in the error log. Once you find the query from the aborted connection, identify which piece of your application issues that query and correlate the queries with the portions of your application.
  • Try increasing the net_read_timeout and net_write_timeout values for MySQL and see if that reduces the number of errors. That said, net_read_timeout is rarely the problem unless you have an extremely poor network: in most cases a query is generated and sent as a single packet to the server, and applications can’t switch to doing something else while leaving the server with a partially received query. There is a very detailed blog post on this topic from our CEO, Peter Zaitsev. (A sketch of checking and adjusting these variables follows this list.)
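As referenced above, here is an illustrative way to check and adjust the variables discussed in this list; pick values that match your application rather than copying these:

mysql -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN
          ('wait_timeout','interactive_timeout','max_allowed_packet',
           'net_read_timeout','net_write_timeout')"
mysql -e "SET GLOBAL max_allowed_packet = 64*1024*1024"   # example value only
mysql -e "SET GLOBAL net_write_timeout = 120"              # example value only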

Aborted connections happen because a connection was not closed properly. The server can’t cause aborted connections unless there is a networking problem between the server and the client (such as a duplex mismatch, with the server at half duplex and the client at full duplex) – but in that case the network, not the server, is causing the problem. Such problems should show up as errors on the network interface. To be extra sure, check the ifconfig -a output on the MySQL server for interface errors.
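For example, a quick check for non-zero error counters on the server's interfaces:

ifconfig -a | grep -E 'errors|dropped|overruns'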

Another way to troubleshoot this problem is via tcpdump. You can refer to this blog post on how to track down the source of aborted connections. Look for potential network issues, timeouts and resource issues with MySQL.

I found this blog post useful in explaining how to use tcpdump on busy hosts. It provides help for tracking down the TCP exchange sequence that led to the aborted connection, which can help you figure out why the connection broke.

For network issues, use ping to measure the round-trip time (RTT) between the machine where mysqld runs and the machine the application makes requests from. Send a large file (1GB or more) both ways between the client and server machines, watch the process with tcpdump, and check whether any errors occurred during the transfer. Repeat this test a few times. I also found this from my colleague Marco Tusa useful: Effective way to check network connection.
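A rough sketch of that test; the hostname, interface and file paths below are placeholders:

ping -c 20 db1.example.com                                  # round-trip time to the MySQL host
dd if=/dev/zero of=/tmp/testfile.bin bs=1M count=1024       # ~1GB test file
sudo tcpdump -i eth0 -w /tmp/transfer.pcap host db1.example.com &
scp /tmp/testfile.bin user@db1.example.com:/tmp/            # push the file to the server
scp user@db1.example.com:/tmp/testfile.bin /tmp/copy.bin    # and pull it back
kill %1                                                     # stop tcpdump, then inspect the capture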

One other idea is to capture the netstat -s output along with a timestamp every N seconds (for example, every 10 seconds). Using the timestamp of an aborted connection error in the MySQL error log, you can then compare the netstat samples taken before and after the error, and see which error counters increased under the TcpExt section of netstat -s.
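A minimal sketch of that sampling loop (interval and output path are illustrative):

while true; do date; netstat -s; sleep 10; done >> /tmp/netstat-samples.log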

Along with that, you should also check the network infrastructure sitting between the client and the server for proxies, load balancers, and firewalls that could be causing a problem.

Conclusion:
I’ve tried to cover communication failure errors and how to identify and fix possible aborted connections. Keep in mind that faulty Ethernet adapters, hubs, switches, cables, and so forth can cause these issues as well; often the only way to properly address them is to replace the faulty hardware itself.

Categories: MySQL

Benchmark MongoDB with sysbench

MySQL Performance Blog - Fri, 2016-05-13 16:17

In this blog post, we’ll discuss how to benchmark MongoDB with sysbench.

In an earlier post, I mentioned our use of sysbench-mongodb (via this fork) to run benchmarks of MongoDB servers. I now want to share our work extending sysbench to make it work with MongoDB.

If you’re not familiar with sysbench, it’s a great project developed by Alexey Kopytov that lets you run different types of benchmarks (referred to as “tests” by the tool), including database benchmarks. The database tests are implemented in Lua scripts, which means you can customize them as needed (or even write new ones from scratch) – something useful for simulating specific workloads.

All of the database tests in sysbench assume an SQL-based database, so instead of trying to shoehorn MongoDB tests into this framework I modified the connect/disconnect functions to handle MongoDB, and then implemented new functions specific for this database.

You can find the work (which is still in progress but usable, and in fact currently used by us in benchmarks) on the dev-mongodb-support-1.0 branch of our sysbench fork.

To use it, you just need to specify the --mongo-url argument (others too, as needed, but this is the one that must be present for sysbench to detect that a MongoDB test is requested), and then provide the path to the Lua script you want to run. The following is an example:

sysbench --mongo-write-concern=1 --mongo-url="mongodb://localhost" --mongo-database-name=sbtest --test=sysbench/sysbench/tests/mongodb/oltp.lua --oltp_table_size=60000000 --oltp_tables_count=16 --num-threads=512 --rand-type=pareto --report-interval=10 --max-requests=0 --max-time=600 --oltp-point-selects=10 --oltp-simple-ranges=1 --oltp-sum-ranges=1 --oltp-order-ranges=1 --oltp-distinct-ranges=1 --oltp-index-updates=1 --oltp-non-index-updates=1 --oltp-inserts=1 run

To build this branch, you’ll first need to build and install (or otherwise obtain) the mongo-c-driver project, as that is what we use to connect to MongoDB. Once that’s done, building is just a matter of running the following commands from the repo’s root:

./autogen.sh
./configure
make
sudo make install # optional

The changes should not affect the other database tests in sysbench, though I have only verified that the MySQL ones continue to work.

Right now, the workload from sysbench-mongodb is implemented in Lua scripts (oltp.lua), and work is in progress to allow freeform operations to be created with new Lua scripts (by providing functions that take JSON as the argument). As an alternative, you may want to check out this much-less-tested (and currently unstable) branch based on luamongo. It already supports the creation of arbitrary workloads in Lua. In this case, you also need to build luamongo, which is included.

With either branch, you can add new tests by implementing new Lua scripts (though the dev-mongodb-support-1.0 branch still needs a few functions implemented on the C side to support arbitrary operations from the Lua side).

We think there are still some types of operations needed to improve sysbench’s usefulness for MongoDB, such as queries involving arrays, union, the $in operator, geospatial operators, and in place updates.

We hope you find this useful, and we welcome suggestions and bug reports to improve it.

Happy benchmarking!

Categories: MySQL

ProxySQL versus MaxScale for OLTP RO workloads

MySQL Performance Blog - Thu, 2016-05-12 17:52

In this blog post, we’ll discuss ProxySQL versus MaxScale for OLTP RO workloads.

Continuing my series of READ-ONLY benchmarks (you can find the other posts here: https://www.percona.com/blog/2016/04/07/mysql-5-7-sysbench-oltp-read-results-really-faster/ and https://www.percona.com/blog/2016/03/28/mysql-5-7-primary-key-lookup-results-is-it-really-faster), in this post I want to see how much overhead a proxy adds.

In my opinion, there are only two solid proxy software options for MySQL at the moment: ProxySQL and MaxScale. In the past, there was also MySQL Proxy, but it is pretty much dead for now. Its replacement, MySQL Router, is still in the very early stages and seriously lacks any features that would compete with ProxySQL and MaxScale. This will most likely change in the future – when MySQL Router adds more features, I will reevaluate them then!

To test the proxies, I will start with a very simple setup to gauge basic performance characteristics. I will use a sysbench client and a proxy running on the same box. Sysbench connects to the proxy via a local socket (for minimal network and TCP overhead), and the proxy connects to a remote MySQL server over a 10Gb network. This way, the proxy and sysbench share the same server resources.
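Such a run might look roughly like the following – a hypothetical sketch only, since the socket path and options depend on how the proxy is configured (the real scripts are in the linked repository):

sysbench --test=tests/db/oltp.lua --oltp-read-only=on \
  --mysql-socket=/tmp/proxysql.sock --mysql-user=sbtest \
  --oltp_tables_count=10 --oltp_table_size=10000000 --rand-type=pareto \
  --num-threads=64 --max-time=300 --max-requests=0 run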

Other parameters:

  • CPU: 56 logical CPU threads, Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
  • sysbench: ten tables x 10 million rows, Pareto distribution
  • OS: Ubuntu 15.10 (Wily Werewolf)
  • MySQL 5.7
  • MaxScale version 1.4.1
  • ProxySQL version 1.2.0b

You can find more details about benchmarks, scripts and configs here: https://github.com/Percona-Lab/benchmark-results/tree/201603-mysql55-56-57-RO/remote-OLTP-proxy-may.

An important parameter to consider is how much CPU you allocate to the proxy. Both ProxySQL and MaxScale allow you to configure how many threads they can use to process user requests and route queries. I’ve found that 16 threads for ProxySQL and 8 threads for MaxScale are optimal (I will also show results for MaxScale with 16 threads in this post). Both proxies also allow you to set up simple load-balancing configurations, or to work in read-write splitting mode. In this case, I will use simple load balancing, since there are no read-write splitting requirements in a read-only workload.
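For reference, a sketch of how those thread counts can be set; the admin port, credentials and file path below are defaults/illustrative values, so adjust them to your installation:

# ProxySQL: set worker threads via the admin interface; mysql-threads is read at
# startup, so restart ProxySQL afterwards.
mysql -h127.0.0.1 -P6032 -uadmin -padmin -e "
  UPDATE global_variables SET variable_value='16' WHERE variable_name='mysql-threads';
  SAVE MYSQL VARIABLES TO DISK;"

# MaxScale: thread count in the [maxscale] section of its config file.
cat >> /etc/maxscale.cnf <<'EOF'
[maxscale]
threads=8
EOF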

ProxySQL

First result: How does ProxySQL perform compared to vanilla MySQL 5.7?

As we can see, there is a noticeable drop in performance with ProxySQL. This is expected, as ProxySQL does extra work to process queries. What is good though is that ProxySQL scales with increasing user connections.

One of the tricks that ProxySQL has is a “fast-forward” mode, which minimizes overhead from processing (but as a drawback, you can’t use many of the other features). Out of curiosity, let’s see how the “fast-forward” mode performs:
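Fast-forward is enabled per user; a hypothetical example via the admin interface (user name and credentials are placeholders):

mysql -h127.0.0.1 -P6032 -uadmin -padmin -e "
  UPDATE mysql_users SET fast_forward=1 WHERE username='sbtest';
  LOAD MYSQL USERS TO RUNTIME;
  SAVE MYSQL USERS TO DISK;"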

MaxScale

Now let’s see what happens with MaxScale. Before showing the next chart, let me note that it contains “error bars,” which are presented as vertical bars. Basically, an “error bar” shows the standard deviation: the longer the bar, the more variation was observed during the experiment. We want to see less variance, as it implies more stable performance.

Here are results for MaxScale versus ProxySQL:

We can see that at lower numbers of threads both proxies perform similarly, but MaxScale has a harder time scaling above 100 threads. On average, MaxScale’s throughput is worse, and there is a lot of variation. In general, MaxScale demands more CPU resources and uses more CPU per request than ProxySQL. This holds true if we run MaxScale with 16 threads (instead of 8):

MaxScale with 16 threads does not handle the workload well, and there is a lot of variation along with some visible scalability issues.

To summarize, here is a chart with relative performance (vanilla MySQL 5.7 is shown as 1):

While this chart does show that MaxScale has less overhead from 1-6 threads, it doesn’t scale as user load increases.

Categories: MySQL