MySQL

Percona Monitoring and Management 1.0.1 Beta

MySQL Performance Blog - Fri, 2016-06-10 20:05

Percona is glad to announce the release of Percona Monitoring and Management 1.0.1 Beta on June 10, 2016.

Like prior versions, PMM is distributed through Docker Hub and is free to download. Full instructions for download and installation of the server and client are available in the documentation.
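
For reference, getting the server running from Docker Hub looks roughly like the sketch below. The percona/pmm-server repository is the published image, but the tag shown is an assumption on my part; follow the linked documentation for the exact, current commands and data-volume setup.

# pull the server image from Docker Hub
docker pull percona/pmm-server:1.0.1
# run the server and expose the web interface on port 80
docker run -d -p 80:80 --name pmm-server percona/pmm-server:1.0.1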

Notable changes to the tool include:

  • Grafana 3.0
  • Replaced custom web server with NGINX
  • Eliminated most of the ports for PMM server container (now only two – 9001 and configurable 80)
  • Updated to the latest versions of Prometheus, exporters, QAN agent
  • Added mongodb_exporter
  • Added MongoDB dashboards
  • Replaced prom-config-api with Consul
  • Improvements to pmm-admin and ability to set server address with the port
  • Added “Server Summary” with aggregated query metrics to QAN app
  • MySQL dashboard updates, added “MySQL InnoDB Metrics Advanced” dashboard

The new server summary in PMM Beta 1.0.1

Metric rates in Query Analytics

Available Dashboards in Metrics

Full documentation is available and includes details on installation and architecture. A demonstration of the tool has been set up at pmmdemo.percona.com.

We have also set up forums for discussing PMM.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Categories: MySQL

Percona XtraDB Cluster 5.7 beta is now available

MySQL Performance Blog - Thu, 2016-06-09 16:04

Percona is glad to announce the release of Percona XtraDB Cluster 5.7.11-4beta-25.14.2 on June 9, 2016. Binaries are available from the downloads area or our software repositories.

NOTE: This beta release is only available from the testing repository. It is not meant for upgrade from Percona XtraDB Cluster 5.6 and earlier versions. Only a fresh installation is supported.

Percona XtraDB Cluster 5.7.11-4beta-25.14.2 is based on the following:

This is the first beta release in the Percona XtraDB Cluster 5.7 series. It includes all changes from upstream releases and the following changes:

  • Percona XtraDB Cluster 5.7 does not include wsrep_sst_xtrabackup. It has been replaced by wsrep_sst_xtrabackup_v2.
  • The wsrep_mysql_replication_bundle variable has been removed.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Categories: MySQL

Using MySQL 5.7 Document Store with Internet of Things (IoT)

MySQL Performance Blog - Wed, 2016-06-08 17:17

In this blog post, I’ll discuss how to use MySQL 5.7 Document Store to track data from Internet of Things (IoT) devices.

Using JSON in MySQL 5.7

In my previous blog post, I looked into the MySQL 5.7.12 Document Store. This is a brand new feature in MySQL 5.7, and many people are asking when they need or want to use the JSON or Document Store interface.

Storing data in JSON may be quite useful in some cases, for example:

  • You already have JSON data (e.g., from external feeds) and need to store it anyway. Using the JSON datatype will be more convenient and more efficient.
  • For the Internet of Things, specifically, when storing events from sensors: some sensors may send only temperature data, some may send temperature, humidity and light (but light information is only recorded during the day), etc. Storing it in JSON format may be more convenient in that you don’t have to declare all possible fields in advance, and do not have to run “alter table” if a new sensor starts sending new types of data.

Internet of Things

In this blog post, I will show an example of storing an event stream from Particle Photon. Last time I created a device to measure light and temperature and stored the results in MySQL. Particle.io provides the ability to use its own MQTT server and publish events with:

Spark.publish("temperature", String(temperature));
Spark.publish("humidity", String(humidity));
Spark.publish("light", String(light));

Then, I wanted to “subscribe” to my events and insert those into MySQL (for further analysis). As we have three different metrics for the same device, we have two basic options:

  1. Use a field per metric and create something like this: device_id int, temperature double, humidity double, light double
  2. Use a record per metric and have something like this: device_id int, event_name varchar(255), event_data text (please see this Internet of Things, Messaging and MySQL blog post for more details)

The first option above is not flexible. If my device starts measuring the soil temperature, I will have to “alter table add column”.

Option two is better in this regard, but I may significantly increase the table size as I have to store the name as a string for each measurement. In addition, some devices may send more complex metrics (i.e., latitude and longitude).

In this case, using JSON for storing metrics can be a better option, and I've also decided to try Document Store as well.
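
As a quick illustration of the plain JSON-column approach (the table and column names below are mine, purely for illustration, not part of the Particle feed):

CREATE TABLE sensor_events (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  device_id INT NOT NULL,
  event_data JSON,                 -- whatever the sensor sent, schema-free
  created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO sensor_events (device_id, event_data)
  VALUES (42, '{"temperature": 25.0, "humidity": 49.4, "light": 312}');

A new sensor that reports, say, soil temperature simply adds a key to the JSON value; no ALTER TABLE is needed.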

First, we will need to enable the X Plugin and set up the Node.js connector. Here are the steps required:

  1. Enable X Plugin in MySQL 5.7.12+, which uses a different port (33060 by default); see the example after this list
  2. Download and install NodeJS (>4.2) and mysql-connector-nodejs-1.0.2.tar.gz (follow the Getting Started with Connector/Node.JS guide).
    # node --version
    v4.4.4
    # wget https://dev.mysql.com/get/Downloads/Connector-Nodejs/mysql-connector-nodejs-1.0.2.tar.gz
    # npm install mysql-connector-nodejs-1.0.2.tar.gz
    Please note: on older systems you will probably need to upgrade the nodejs version (follow the Installing Node.js via package manager guide).
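
Enabling the X Plugin (step 1 above) is done from a regular SQL session; a minimal sketch, assuming a Linux build of MySQL 5.7.12+:

mysql> INSTALL PLUGIN mysqlx SONAME 'mysqlx.so';
mysql> SHOW GLOBAL VARIABLES LIKE 'mysqlx_port';   -- should report 33060 by default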

Storing Events from Sensors

Particle.io provides you with an API that allows you to subscribe to all public events (“events” are what sensors send). The API is for NodeJS, which is really convenient as we can use NodeJS for MySQL 5.7.12 Document Store as well.

To use the Particle API, install the particle-api-js module:

$ npm install particle-api-js

I’ve created the following NodeJS code to subscribe to all public events, and then add the data (in JSON format) to a document store:

var mysqlx = require('mysqlx');
var Particle = require('particle-api-js');
var particle = new Particle();
var token = '<place your token here>';
var mySession = mysqlx.getSession({
    host: 'localhost',
    port: 33060,
    dbUser: 'root',
    dbPassword: '<place your pass here>'
});

process.on('SIGINT', function() {
    console.log("Caught interrupt signal. Exiting...");
    process.exit();
});

particle.getEventStream({ auth: token }).then(function(stream) {
    stream.on('event', function(data) {
        console.log(data);
        mySession.then(session => {
            session.getSchema("iot").getCollection("event_stream")
                .add(data)
                .execute(function (row) {
                    // can log something here
                }).catch(err => {
                    console.log(err);
                })
                .then(function (notices) {
                    console.log("Wrote to MySQL: " + JSON.stringify(notices));
                });
        }).catch(function (err) {
            console.log(err);
            process.exit();
        });
    });
}).catch(function (err) {
    console.log(err.stack);
    process.exit();
});

How it works:

  • particle.getEventStream({ auth: token}) gives me the stream of events. From there I can subscribe to specific event names, or to all public events using the generic name “events”: stream.on(‘event’, function(data).
  • function(data) is a callback function fired when a new event is ready. The event payload arrives as JSON in data. From there I can simply insert it into the document store: .add( data ).execute() will insert the JSON data into the event_stream collection.

One of the reasons I use a document store here is that I do not have to know what is inside the event data. I do not have to parse it; I simply throw it into MySQL and analyze it later. If the format of the data changes in the future, my application will not break.

Inside the data stream

Here is an example of running the above code:

{ data: 'Humid: 49.40 Temp: 25.00 *C Dew: 13.66 *C HeatI: 25.88 *C', ttl: '60', published_at: '2016-05-20T19:30:51.433Z', coreid: '2b0034000947343337373738', name: 'log' }
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["a3058c16-15db-0dab-f349-99c91a00"]}}
{ data: 'null', ttl: '60', published_at: '2016-05-20T19:30:51.418Z', coreid: '50ff72...', name: 'registerdev' }
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["eff0de02-726e-34bd-c443-6ecbccdd"]}}
{ data: '24.900000', ttl: '60', published_at: '2016-05-20T19:30:51.480Z', coreid: '2d0024...', name: 'Humid 2' }
{ data: '[{"currentTemp":19.25},{"currentTemp":19.19},{"currentTemp":100.00}]', ttl: '60', published_at: '2016-05-20T19:30:52.896Z', coreid: '2d002c...', name: 'getTempData' }
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["5f1de278-05e0-6193-6e30-0ebd78f7"]}}
{ data: '{"pump":0,"salt":0}', ttl: '60', published_at: '2016-05-20T19:30:51.491Z', coreid: '55ff6...', name: 'status' }
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["d6fcf85f-4cba-fd59-a5ec-2bd78d4e"]}}

(Please note: although the stream is public, I’ve tried to anonymize the results a little.)

As we can see, each event is JSON and shares the same envelope (published_at, name, ttl and coreid). I could have implemented it as a MySQL table structure, adding published_at, name, ttl and coreid as separate fields. However, I would then depend on those specific fields and have to change my application if they changed. We can also see how differently the devices send data back: it can be just a number, a string or another JSON document.

Analyzing the results

Now I can go to MySQL and use SQL (which I’ve used for >15 years) to find out what I’ve collected. First, I want to know how many device names I have:

mysql -A iot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3289
Server version: 5.7.12 MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> select count(distinct json_unquote(doc->'$.name')) from event_stream;
+---------------------------------------------+
| count(distinct json_unquote(doc->'$.name')) |
+---------------------------------------------+
|                                        1887 |
+---------------------------------------------+
1 row in set (5.47 sec)

That is slow! As described in my previous post, I can create a virtual column and index for doc->’$.name’ to make it faster:

mysql> alter table event_stream add column name varchar(255)
    -> generated always as (json_unquote(doc->'$.name')) virtual;
Query OK, 0 rows affected (0.17 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> alter table event_stream add key (name);
Query OK, 0 rows affected (3.47 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> show create table event_stream\G
*************************** 1. row ***************************
       Table: event_stream
Create Table: CREATE TABLE `event_stream` (
  `doc` json DEFAULT NULL,
  `_id` varchar(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$._id'))) STORED NOT NULL,
  `name` varchar(255) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$.name'))) VIRTUAL,
  UNIQUE KEY `_id` (`_id`),
  KEY `name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
1 row in set (0.00 sec)

mysql> select count(distinct name) from event_stream;
+----------------------+
| count(distinct name) |
+----------------------+
|                 1820 |
+----------------------+
1 row in set (0.67 sec)

How many beers left?

Eric Joyce has published a Keg Inventory Counter that uses a Particle Photon device to measure the amount of beer left in a keg by counting 12oz pours. I want to see the average and the lowest amount of beer left per day:

mysql> select date(json_unquote(doc->'$.published_at')) as day,
    ->        avg(json_unquote(doc->'$.data')) as avg_beer_left,
    ->        min(json_unquote(doc->'$.data')) as min_beer_left
    -> from event_stream
    -> where name = 'Beers_left'
    -> group by date(json_unquote(doc->'$.published_at'));
+------------+--------------------+---------------+
| day        | avg_beer_left      | min_beer_left |
+------------+--------------------+---------------+
| 2016-05-13 | 53.21008358996988  | 53.2          |
| 2016-05-18 | 52.89973045822105  | 52.8          |
| 2016-05-19 | 52.669233854792694 | 52.6          |
| 2016-05-20 | 52.60644257702987  | 52.6          |
+------------+--------------------+---------------+
4 rows in set (0.44 sec)

Conclusion

Document Store can be very beneficial if an application works with a JSON field and does not know or does not care about its structure. In this post, I've used the “save to MySQL and analyze later” approach. We can then add virtual fields and indexes as needed.

Categories: MySQL

Choosing MySQL High Availability Solutions

MySQL Performance Blog - Tue, 2016-06-07 20:25

In this blog post we’ll look at various MySQL high availability solutions, and examine their pluses and minuses.

High availability environments provide substantial benefit for databases that must remain available. A high availability database environment co-locates a database across multiple machines, any one of which can assume the functions of the database. In this way, a database doesn’t have a “single point of failure.”

There are many HA strategies and solutions, so how do you choose the best one among a myriad of options? The first question to ask is “what is the problem you are trying to solve?” The answers boil down to redundancy versus scaling versus high availability. These are not necessarily all the same!

  • Need multiple copies of data in event of a disaster
  • Need to increase read and/or write throughput
  • Need to minimize outage duration

When you are planning your database environment, it’s important to remember the CAP Theorem applies. The CAP Theorem breaks problems into three categories: consistency, availability, and partition tolerance. You can pick any two from those three, at the expense of the third.

  • Consistency. All nodes see the same data at the same time
  • Availability. Every request receives a response about whether it succeeded or not
  • Partition Tolerance. The system continues to operate despite arbitrary partitioning due to network failures

Whatever solution you choose, it should maximize consistency. The problem is that although MySQL replication is great, it alone does not guarantee consistency across all nodes. There is always the potential for data to be out of sync, since transactions can be lost during failover, among other reasons. Galera-based clusters such as Percona XtraDB Cluster are certification-based to prevent this!

Data loss

The first question you should ask yourself is “Can I afford to lose data?”

This often depends on the application. Applications should check status codes on transactions to be sure they were committed; many do not! It is also possible to lose transactions during a failover: simple replication schemes, in particular, can lose data at that point.

Inconsistent nodes are another problem. Without conflict detection and resolution, inconsistent nodes are unavoidable. One solution is to run pt-table-checksum often to check for inconsistent data across replication nodes. Another option is using a Galera-based Distributed Cluster, such as Percona XtraDB Cluster, with a certification process.
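
As a sketch of the first option (the host, user and password below are placeholders):

# run on the master: writes per-chunk checksums into percona.checksums,
# which the replicas then apply and compare against their own data
pt-table-checksum h=master_host,u=checksum_user,p=checksum_password

# optionally, print the statements that would bring replicas back in sync
pt-table-sync --print --replicate percona.checksums h=master_host,u=checksum_user,p=checksum_password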

Avoiding a single point of failure

What is watching your system? Or is anything standing ready to intervene in a failure? For replication, take a look at MHA and MySQL Orchestrator.  Both are great tools to perform failover of a Replica.  There are others.

For Percona XtraDB Cluster, failover is typically much faster, but it is not the perfect solution in every case.

Can I afford lost transactions?

Many MySQL DBAs worry about setting innodb_flush_log_at_trx_commit and sync_binlog to 1 for ACID compliance, but then use replication with no consistency checks! Is this logically consistent? Percona XtraDB Cluster maintains consistency through certification.
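
For reference, the durability settings this paragraph refers to normally live in my.cnf; a minimal sketch, not a complete configuration:

[mysqld]
# flush and sync the InnoDB redo log at every transaction commit
innodb_flush_log_at_trx_commit = 1
# sync the binary log to disk at every commit
sync_binlog = 1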

Conflict detection and resolution

All solutions must have some means of conflict detection and resolution. Galera's certification process works as follows:

  • Transaction continues on a node as normal until it reaches COMMIT stage
  • Changes are collected into a writeset
  • Writeset is sent to all nodes for certification
  • PKs are used to determine if the writeset can be applied
  • If certification fails, the writeset is dropped and the transaction is rolled back.
  • If it succeeds, the transaction commits and the writesets are applied to all of the nodes.
  • All nodes reach the same decision on every transaction, so the certification process is deterministic.
Do I want Failover or a Distributed System?

Another important consideration is whether you should have a failover or a distributed system. A failover system runs one instance at a time, and “fails over” to a different instance when an issue occurs. A distributed system runs several instances at one time, all handling different data.

  • Failover pitfalls:
    • Failover systems have a monitor which detects failed nodes and moves services elsewhere if available
    • Failover takes time!
  • Distributed systems:
    • Distributed systems minimize failover time

Another question is should your failover be automatic or manual?

  • Advantage of Manual Failover
    • The primary advantage to failing over manually is that a human usually can make a better decision as to whether failover is necessary.
    • Systems rarely get it perfect, but they can be close!
  • Advantage of Automatic Failover
    • More Nines due to minimized outages
    • No need to wait on a DBA to perform

A further question is how fast does failover have to occur? Obviously, the faster it happens, the less time there is for potential data loss.

  • Replication / MHA / MMM
    • Depends on how long it takes for pending Replica transactions to complete before failover can occur
    • Typically around 30 seconds
  • DRBD
    • Typically between 15 and 30 seconds
  • Percona XtraDB Cluster / MySQL Cluster
    • VERY fast failover. Typically less than 1 second depending upon Load Balancer
How many 9’s do you really need?

The “nines” measure is a standard for how available a system is. When it comes to “how many 9s,” each additional 9 is an order of magnitude less downtime: 99.99% is four nines, while 99.999% is five nines.

Every manager's response to the question of how many nines is “As many as I can get.” That sounds great, but the reality is that tradeoffs are required! Many applications can tolerate a few minutes of downtime with minimal impact. The following table shows downtime as correlated to each “9”:

Do I need to scale reads and/or writes?

When looking at your environment, it's important to understand your workload. Is your workload heavy on reads, writes, or both? Knowing whether you need to scale reads or writes is important when choosing your HA solution:

  • Scaling reads
    • Most solutions offer ability to read from multiple nodes or replicas
    • MHA, Percona XtraDB Cluster, MySQL Cluster, and others are well suited for this
  • Scaling writes
    • Many people wrongly try to scale writes by writing to multiple nodes in Percona XtraDB Cluster, leading to conflicts
    • Others try it with master-master replication, which is also problematic
    • Possibly the best solution in this regard is MySQL Cluster

What about provisioning new nodes?

  • Replication
    • Largely, this is a manual process
    • MySQL Utilities makes this easier than ever
  • Distributed Clusters
The rule of threes

With Percona XtraDB Cluster, try to have three of everything. If you span a data center, have three data centers. If your nodes are on a switch, try to have three switches.

Percona XtraDB Cluster needs at least three nodes in the cluster.  An odd number is preferred for voting reasons. Forget about trying to keep a cluster alive during failure with only two data centers.  You are better off making one a DR site. Forget about custom weighting to try to get by on two data centers.  The 51% rule will get you anyway!

How many data centers do I have?

Knowing how many data centers are involved in your environment is a critical factor. Running multiple data centers has implications for the HA solution you adopt.

What if I only have one data center? You can gain protection against a single failed node or more, depending on cluster size. If you have two data centers, you should probably be considering the second data center as a DR solution. Having three or more data centers is the most robust solution when using Galera-based clusters such as Percona XtraDB Cluster.

How do I plan for disaster recovery?

Planning for disaster recovery is crucial in your HA environment. Make sure the DR node(s) can handle the traffic, if even at a minimized performance level.

  • Replicating from a Percona XtraDB Cluster to a DR site
    • Asynchronous Replication from Percona XtraDB Cluster to a single node
    • Asynchronous Replication from Percona XtraDB Cluster to a replication topology
    • Asynchronous Replication from Percona XtraDB Cluster to another Percona XtraDB Cluster
What storage engine(s) do I need?

Nowadays especially, there is a multitude of storage engines available for a database environment. Which one should you use for your HA solution? Your solution will help determine which storage engine you can employ.

  • Replication. Not storage engine dependent; works with all storage engines
  • Percona XtraDB Cluster. Requires InnoDB. Support for MyISAM is experimental and should not be used in Production
  • MySQL Cluster. Requires NDB Storage Engine
Load balancer options

Load balancers provide a means to distribute your workload across your environment resources so as not to create a bottleneck at any one particular point. The following are some load balancing options:

  • HAProxy
    • Open-source software solution
    • Cannot split reads and writes. If that is a requirement, the app will need to do it!
  • F5 BigIP
    • Typical hardware solution
  • MaxScale
    • Can do read/write splitting
  • Elastic Load Balancer (ELB)
    • Amazon solution
What happens if the cluster reboots?

Some changes require that the cluster be rebooted for the changes to be applied. For example, changing a parameter value in a parameter group is only applied to the cluster after the cluster is rebooted. A cluster could also reboot due to power interruption or other technology failures.

  • A power outage in a single data center could lead to issues
    • Percona XtraDB cluster can be configured to auto bootstrap
    • May not always work when all nodes lose power simultaneously. While the server is running, the grastate.dat file shows -1 for seqno
  • Surviving a Reboot
    • Helpful if nodes are shutdown by a System Administrator for a reboot or other such process
    • Normal shutdown sets seqno properly
Do I need to be able to read after writing?

Asynchronous replication does not guarantee consistent views of data across nodes. Percona XtraDB Cluster offers causal reads: the replica will wait for the event to be applied before processing additional queries, guaranteeing a consistent read state across nodes.
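
A minimal sketch of requesting causal reads for one session, assuming a recent Galera/Percona XtraDB Cluster version where wsrep_sync_wait has replaced the older wsrep_causal_reads variable (the table in the example is hypothetical):

-- make reads in this session wait until the node has applied
-- everything it has already received from the cluster
SET SESSION wsrep_sync_wait = 1;
SELECT balance FROM accounts WHERE id = 42;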

What if I do a lot of data loading?

In the recent past, it was conventional wisdom to use replication in such scenarios over Percona XtraDB Cluster.  MTS does help if data is distributed over multiple schemas but is not a fit for all situations. Percona XtraDB Cluster is now a viable option since we discovered a bug in Galera which did not properly split large transactions.

Have I taken precautions against split brain?

Split brain occurs when a cluster has its nodes divided from one another, most often due to a network blip, and the nodes form two or more new, independent (and thus divergent) clusters. Percona XtraDB Cluster is configured to go into a non-primary state and refuse to take traffic in this situation. A newer setting in Percona XtraDB Cluster allows dirty reads from non-primary nodes.

Does my application require high concurrency?

Newer approaches to replication allow for parallel threads (Percona XtraDB Cluster has had this from the beginning), such as Multi-Threaded Slaves (MTS). MTS allows a replica to have multiple SQL threads, each with its own relay log. With MTS, you cannot trust SHOW SLAVE STATUS to get the relay log position, so enable GTID to keep backups taken via Percona XtraBackup safe.

Am I limited on RAM?

Some Distributed solutions such as MySQL Cluster require a lot of RAM, even with file-based tables.  Be sure to plan appropriately. XtraDB Cluster works much more like a stand-alone node.

How stable is my network?

Networks are never really 100% reliable. Some “Network Problems” are due to outside factors such as system resource contention (especially on virtual machines). Network problems cause inappropriate failover issues. Use LAN segments with Percona XtraDB Cluster to minimize network traffic across the WAN.

Conclusion

Making the right choice depends on:

  • Knowing what you really need!
  • Knowing your options.
  • Knowing your constraints!
  • Understanding the pros/cons of each solution
  • Setting expectations properly!

For more information on how to plan your HA environment, and what tools are available, sign up today for my webinar Choosing a MySQL® High Availability Solution on June 23, 2016 at 10:00 am. You can also get some great insights by watching these videos on our high availability video playlist.

Categories: MySQL

Severe performance regression in MySQL 5.7 crash recovery

MySQL Performance Blog - Tue, 2016-06-07 13:01

In this post, we’ll discuss some insight I’ve gained regarding severe performance regression in MySQL 5.7 crash recovery.

Working on different InnoDB log file sizes in my previous post:

What is a big innodb_log_file_size?

I tried to understand how we can make InnoDB crash recovery faster, but found a rather surprising 5.7 crash recovery regression.

Basically, crash recovery in MySQL 5.7 is two times slower, due to this issue: https://bugs.mysql.com/bug.php?id=80788. InnoDB now performs the log scan twice, compared to a single scan in MySQL 5.6 (no surprise that there is performance degradation).

Fortunately, there is a proposed patch for MySQL 5.7, so I hope it will be improved soon.

As for general crash recovery improvement, my opinion is that it would be much improved by making it multi-threaded. Right now it is significantly limited by the single thread that reads and processes log entries one-by-one. With the current hardware, consisting of tens of cores and fast SSD, we can improve crash recovery by utilizing all the resources we have.

One small improvement that can be made is to disable PERFORMANCE_SCHEMA during recovery (these stats are not needed anyway).
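
A minimal sketch of what that looks like in the configuration (performance_schema can only be changed at server startup, which is exactly when crash recovery runs):

[mysqld]
performance_schema = OFF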

Categories: MySQL

Webinar Thursday, June 9: Troubleshooting MySQL configuration issues

MySQL Performance Blog - Mon, 2016-06-06 20:24

Please join us on Thursday June 9, at 10:00 am PDT (UTC-7) for the webinar Troubleshooting MySQL configuration issues.

MySQL Server is highly tunable. It has hundreds of configuration options which provide great tuning abilities and, at the same time, can be the source of various issues.

In this webinar you will learn which types of options MySQL Server supports, when they take effect, and how to modify the configuration safely. I will demonstrate best practices and tricks used by Support engineers when they work with bug reports and customer issues that depend heavily on configuration.

Register now.

Sveta Smirnova, Principal Technical Services Engineer

Sveta joined Percona in 2015. Her main professional interests are problem solving, working with tricky issues and bugs, finding patterns that help solve typical issues quicker, and teaching others how to deal with MySQL issues, bugs and gotchas effectively. Before joining Percona, Sveta worked as a Support Engineer in the MySQL Bugs Analysis Support Group at MySQL AB, Sun and Oracle. She is the author of the book “MySQL Troubleshooting” and of the JSON UDF functions for MySQL.
Categories: MySQL

Percona Server 5.7.12-5 is now available

MySQL Performance Blog - Mon, 2016-06-06 19:23

Percona is glad to announce the GA release of Percona Server 5.7.12-5 on June 6, 2016. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.7.12, including all the bug fixes in it, Percona Server 5.7.12-5 is the current GA release in the Percona Server 5.7 series. All of Percona’s software is open-source and free, all the details of the release can be found in the 5.7.12-5 milestone at Launchpad.

Bugs Fixed:

  • MEMORY storage engine did not support JSON columns. Bug fixed #1536469.
  • When Read Free Replication was enabled for TokuDB and there was no explicit primary key for the replicated TokuDB table, there could be duplicate records in the table after an update operation. The fix disables Read Free Replication for tables without an explicit primary key, performs row lookups for UPDATE and DELETE binary log events, and issues a warning. Bug fixed #1536663 (#950).
  • Attempting to execute a non-existing prepared statement with Response Time Distribution plugin enabled could lead to a server crash. Bug fixed #1538019.
  • TokuDB was using different memory allocators, which was causing safemalloc warnings in debug builds and crashes because memory accounting didn’t add up. Bug fixed #1546538 (#962).
  • Adding an index to an InnoDB temporary table while expand_fast_index_creation was enabled could lead to server assertion. Bug fixed #1554622.
  • Percona Server was missing the innodb_numa_interleave server variable. Bug fixed #1561091 (upstream #80288).
  • Running SHOW STATUS in parallel to online buffer pool resizing could lead to server crash. Bug fixed #1577282.
  • InnoDB crash recovery might fail if innodb_flush_method was set to ALL_O_DIRECT. Bug fixed #1529885.
  • Fixed heap allocator/deallocator mismatch in Metrics for scalability measurement. Bug fixed #1581051.
  • Percona Server is now built with system zlib library instead of the older bundled one. Bug fixed #1108016.
  • CMake would fail if TokuDB tests passed. Bug fixed #1521566.
  • Reduced the memory overhead per page in the InnoDB buffer pool. The fix was based on Facebook patch #91e979e. Bug fixed #1536693 (upstream #72466).
  • CREATE TABLE ... LIKE ... could create a system table with an unsupported enforced engine. Bug fixed #1540338.
  • Change buffer merge could throttle to 5% of I/O capacity on an idle server. Bug fixed #1547525.
  • Parallel doublewrite memory was not freed when innodb_fast_shutdown was set to 2. Bug fixed #1578139.
  • The server now shows a more descriptive error message when Percona Server fails with errno == 22 "Invalid argument", if innodb_flush_method was set to ALL_O_DIRECT. Bug fixed #1578604.
  • The error log warning Too many connections was only printed for connection attempts after max_connections plus one SUPER user had connected. If the extra SUPER user was not connected, the warning was not printed for a non-SUPER connection attempt. Bug fixed #1583553.
  • apt-cache show command for percona-server-client was showing innotop included as part of the package. Bug fixed #1201074.
  • A replication slave would fail to connect to a master running 5.5. Bug fixed #1566642 (upstream #80962).
  • The upgrade logic for determining whether a TokuDB upgrade can be performed from the version on disk to the current version was broken due to a regression introduced when fixing bug #684 in Percona Server 5.7.11-4. Bug fixed #717.
  • Fixed jemalloc version parsing error. Bug fixed #528.
  • If ALTER TABLE was run while tokudb_auto_analyze variable was enabled it would trigger auto-analysis, which could lead to a server crash if ALTER TABLE DROP KEY was used because it would be operating on the old table/key meta-data. Bug fixed #945.
  • The tokudb_pk_insert_mode session variable has been deprecated and the behavior will be that of the former tokudb_pk_insert_mode set to 1. The optimization will be used where safe and not used where not safe. Bug fixed #952.
  • Bug in TokuDB Index Condition Pushdown was causing ORDER BY DESC to reverse the scan outside of the WHERE bounds. This would cause query to hang in a sending data state for several minutes in some environments with large amounts of data (3 billion records) if the ORDER BY DESC statement was used. Bugs fixed #988, #233, and #534.

Other bugs fixed: #1510564 (upstream #78981), #1533482 (upstream #79999), #1553166, #1496282 (#964), #1496786 (#956), #1566790, #718, #914, #937, #954, #955, #970, #971, #972, #976, #977, #981, #982, #637, and #982.

NOTE: X Plugin is a part of the MySQL community server so it’s part of the Percona Server 5.7.12-5 release as well. But users will need to install MySQL Shell from official MySQL repositories.

Release notes for Percona Server 5.7.12-5 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.

Categories: MySQL

EL5 and why we’ve had to enable TLSv1.0 again

MySQL Performance Blog - Mon, 2016-06-06 13:52

We have had to revert back to TLSv1.0.

If you saw my previous post on TLSv1.0 (https://www.percona.com/blog/2016/05/23/percona-disabling-tlsv1-0-may-31st-2016/), you’ll know I  wanted to deprecate TLSv1.0 well ahead of PCI’s changes. We made the changes May 31st.

Unfortunately, it has become apparent that EL 5, which is in the final phases of End of Life, does not support TLSv1.1 or TLSv1.2. As such, I have had to re-enable TLSv1.0 support so that users on EL 5 can still receive updates from our repositories.

If you are running EL 5 (RHEL 5 / CentOS 5 / Scientific Linux 5 / etc …), I encourage you to update as soon as possible. As of March 31st, 2017 there will be no more updates at all, and at present EL 5 is effectively receiving very few updates. It also has known vulnerabilities.

Removal of TLSv1.0 support will now take place March 31st, 2017. If there are any EL 5 backports that bring support for TLSv1.1 / TLSv1.2 in the interim, I will seek to remove support earlier.

 

Categories: MySQL

Percona Server for MongoDB 3.2.6-1.0 is now available

MySQL Performance Blog - Fri, 2016-06-03 19:49

Percona is pleased to announce the release of Percona Server for MongoDB 3.2.6-1.0 on June 3, 2016. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB 3.2.6-1.0 is an enhanced, open-source, fully compatible, highly scalable, zero-maintenance downtime database supporting the MongoDB v3.2 protocol and drivers. Based on MongoDB 3.2.6, it extends MongoDB with MongoRocks and PerconaFT storage engines, as well as enterprise-grade features like external authentication and audit logging at no extra cost. Percona Server for MongoDB requires no changes to MongoDB applications or code.

NOTE: The PerconaFT storage engine has been deprecated and will not be available in future releases.

This release includes all changes from MongoDB 3.2.6 as well as the following:

  • PerconaFT has been deprecated. It is still available, but is no longer recommended for production use. PerconaFT will be removed in future releases.
  • MongoRocks is no longer considered experimental and is now recommended for production.

For more information, see the Embracing MongoRocks blog entry.

The release notes are available in the official documentation.

 

Categories: MySQL

MySQL 5.7 By Default 1/3rd Slower Than 5.6 When Using Binary Logs

MySQL Performance Blog - Fri, 2016-06-03 16:11

Researching a performance issue, we came to a startling discovery:

MySQL 5.7 + binlogs is by default 37-45% slower than MySQL 5.6 + binlogs when otherwise using the default MySQL settings

Test server MySQL versions used:
i7, 8 threads, SSD, Centos 7.2.1511
mysql-5.6.30-linux-glibc2.5-x86_64
mysql-5.7.12-linux-glibc2.5-x86_64

mysqld –options:
--no-defaults --log-bin=mysql-bin --server-id=2

Run details:
Sysbench version 0.5, 4 threads, socket file connection

Sysbench Prepare: 

sysbench --test=/usr/share/doc/sysbench/tests/db/parallel_prepare.lua --oltp-auto-inc=off --mysql-engine-trx=yes --mysql-table-engine=innodb --oltp_table_size=1000000 --oltp_tables_count=1 --mysql-db=test --mysql-user=root --db-driver=mysql --mysql-socket=/path_to_socket_file/your_socket_file.sock prepare

Sysbench Run:

sysbench --report-interval=10 --oltp-auto-inc=off --max-time=50 --max-requests=0 --mysql-engine-trx=yes --test=/usr/share/doc/sysbench/tests/db/oltp.lua --init-rng=on --oltp_index_updates=10 --oltp_non_index_updates=10 --oltp_distinct_ranges=15 --oltp_order_ranges=15 --oltp_tables_count=1 --num-threads=4 --oltp_table_size=1000000 --mysql-db=test --mysql-user=root --db-driver=mysql --mysql-socket=/path_to_socket_file/your_socket_file.sock run

Results:

5.6.30: transactions: 7483 (149.60 per sec.)
5.7.12: transactions: 4689 (93.71 per sec.)  — That is a 37.36% decrease!

Note: on high-end systems with premium IO (think Fusion-IO, memory-only, high-end SSD with good caching throughput), the difference would be much smaller or negligible.

The reason?

A helpful comment from Shane Bester on a related bug report made me realize what was happening. Note the following in the MySQL Manual:

“Prior to MySQL 5.7.7, the default value of sync_binlog was 0, which configures no synchronizing to disk—in this case, the server relies on the operating system to flush the binary log’s contents from time to time as for any other file. MySQL 5.7.7 and later use a default value of 1, which is the safest choice, but as noted above can impact performance.” — https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_sync_binlog

The culprit is thus the --sync_binlog=1 change which was made in 5.7.7 (in 5.6 it is 0 by default). While this may indeed be “the safest choice,” one has to wonder why Oracle chose to implement this default change in 5.7.7. After all, there are many other options to aid crash safety.
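
If you prefer the 5.6 behavior and accept the durability trade-off, the setting can be pinned explicitly; a minimal sketch matching the test configuration above:

[mysqld]
log-bin     = mysql-bin
server-id   = 2
# 0 = let the OS decide when to flush the binary log (the 5.6 default);
# 1 = fsync at every commit (the 5.7.7+ default: safest, but slower)
sync_binlog = 0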

A related blog post from the MySQL HA team states:

“Indeed, [with sync_binlog=1,] it increases the total number of fsyncs called, but since MySQL 5.6, the server groups transactions and fsync’s them together, which minimizes greatly a potential performance hit.” — http://mysqlhighavailability.com/replication-defaults-in-mysql-5-7-7/ (ref item #4)

This seems incorrect given our findings, unless perhaps it requires tuning some other option.

This raises some action points/questions for Oracle’s team: why change this now? Was 5.6 never crash-safe in terms of binary logging? How about other options that aid crash safety? Is anything [before 5.7.7] really ACID compliant by default?

In 2009 my colleague Peter Zaitsev had already posted on performance matters in connection with sync_binlog issues. More than seven years later, the questions asked in his post may still be valid today:

“May be opening binlog with O_DSYNC flag if sync_binlog=1 instead of using fsync will help? Or may be binlog pre-allocation would be good solution.” — PZ

Testing the same setup again, but this time with sync_binlog=0 and sync_binlog=1 set explicitly on both servers, we see:

Results for sync_binlog=0:

5.6.30: transactions: 7472 (149.38 per sec.)
5.7.12: transactions: 6594 (131.86 per sec.)  — An 11.73% decrease

Results for sync_binlog=1:

5.6.30: transactions: 3854 (77.03 per sec.)
5.7.12: transactions: 4597 (91.89 per sec.)  — A 19.29% increase

Note: the increase here is to some extent negated by the fact that enabling sync_binlog still causes a significant overall performance drop (30% on 5.7 and 48% on 5.6). Also interesting is that this could be the effect of “tuning the defaults” in 5.7, and it makes one think about the possibility of further defaults tuning/optimization in this area.

Results for sync_binlog=100:

5.6.30: transactions: 7564 (151.12 per sec.)
5.7.12: transactions: 6515 (130.22 per sec.) — A 13.83% decrease

Thus, while 5.7.12 made some improvements when it comes to --sync_binlog=1, when --sync_binlog is turned off or set to 100 we still see a ~11% decrease in performance. This is the same when not using binary logging at all, as a test with only --no-defaults (i.e., 100% vanilla out-of-the-box MySQL 5.6.30 versus MySQL 5.7.12) shows:

Results without binlogs enabled:

5.6.30: transactions: 7891 (157.77 per sec.)
5.7.12: transactions: 6963 (139.22 per sec.)  — An 11.76% decrease

This raises another question for Oracle’s team: why, with four threads, is there a ~11% decrease in performance for 5.7.12 versus 5.6.30 (both vanilla)?

Discussing this internally, we were interested to see whether the arbitrary low number of four threads skewed the results and perhaps only showed a less realistic use case. However, testing with more threads, the numbers became worse still:

Results with 100 threads:

5.6.30: transactions: 20216 (398.89 per sec.)
5.7.12: transactions: 11097 (218.43 per sec.) — A 45.24% decrease

Results with 150 threads:

5.6.30: transactions: 11852 (233.01 per sec.)
5.7.12: transactions: 6606 (129.80 per sec.) — A 44.29% decrease

The findings in this article were compiled from a group effort.

Categories: MySQL

Galera warning “last inactive check”

MySQL Performance Blog - Thu, 2016-06-02 18:51

In this post, we’ll discuss the Galera warning “last inactive check” and what it means.

Problem

I’ve been working with Percona XtraDB Cluster quite a bit recently, and have been investigating various warnings. I came across this one today:

[Warning] WSREP: last inactive check more than PT1.5S ago (PT1.51811S), skipping check

This warning is related to the evs.inactive_check_period option. This option controls the poll period for the group communication response time. If a node is delayed, it is added to a delay list and it can lead to the cluster evicting the node.

Possible Cause

While some troubleshooting tips seem to associate the warning with VMWare snapshots, this isn’t the case here, as we see the warning on a physical machine.

I checked for backups or desynced nodes, and this also wasn’t the case. The warning was not accompanied by any errors or other information, so there was nothing critical happening.

In the troubleshooting link above, Galera developers said:

This can be seen on bare metal as well — with poorly configured mysqld, O/S, or simply being overloaded. All it means is that this thread could not get CPU time for 7.1 seconds. You can imagine that access to resources in virtual machines is even harder (especially I/O) than on bare metal, so you will see this in virtual machines more often.

This is not a Galera specific issue (it just reports being stuck, other mysqld threads are equally stuck) so there is no configuration options for that. You simply must make sure that your system and mysqld are properly configured, that there is enough RAM (buffer pool not over provisioned), that there is swap, that there are proper I/O drivers installed on guest and so on.

Basically, Galera runs in virtual machines as well as the virtual machines approximates bare metal.

It could also be an indication of unstable network or just higher average network latency than expected by the default configuration. In addition to checking network, do check I/O, swap and memory when you do see this warning.

Our graphs and counters otherwise look healthy. If this is the case, this is most likely nothing to worry about.

It is also a good idea to ensure your nodes are desynced before backup. Look for spikes in your workload. Also check that swappiness is set to 1 on modern kernels.
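
A minimal sketch of checking and setting swappiness (standard sysctl usage, nothing Galera-specific):

# check the current value
cat /proc/sys/vm/swappiness
# set it for the running kernel
sysctl -w vm.swappiness=1
# persist it across reboots
echo 'vm.swappiness = 1' >> /etc/sysctl.conf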

If all of this looks good, ensure the servers are all talking to the same NTP server, have the same time zone and the times and dates are in sync. While this warning could be a sign of an overloaded system, if everything else looks good this warning isn’t something to worry about.

Source

The warning comes from evs_proto.cpp in the Galera code:

if (last_inactive_check_ + inactive_check_period_*3 < now)
{
log_warn << "last inactive check more than " << inactive_check_period_*3
<< " ago (" << (now - last_inactive_check_)
<< "), skipping check";
last_inactive_check_ = now;
return;
}

Since the default for inactive_check_period is one second according to the Galera documentation, if it is now later than three seconds after the last check, it skips the rest of the above routine and adds the node to the delay list and does some other logic. The reason it does this is that it doesn’t want to rely on stale counters before making decisions. The message is really just letting you know that.

In Percona XtraDB Cluster, this setting defaults to 0.5s. This warning could simply mean that your evs.inactive_check_period is too low, and the delay is not high enough to add the node to the delay list. You could consider increasing evs.inactive_check_period to resolve the warnings. (Apparently it may now also default to 0.5s in Galera, but the documentation is stale.)

Possible Solution

To find a sane value my colleague David Bennett came up with this command line, which gives you an idea of when your check warnings are happening:

$ cat mysqld.log | grep 'last inactive check more than' | perl -ne 'm/\(PT(.*)S\)/; print $1."\n"' | sort -n | uniq -c
1 1.55228
1 1.5523
1 1.55257
1 1.55345
1 1.55363
1 1.5543
1 1.55436
1 1.55483
1 1.5552
1 1.55582

Therefore, in this case, it may be a good idea to set inactive_check_period at 1 or 1.5 to make the warnings go away.
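
If you decide to raise it, the setting goes into wsrep_provider_options; a minimal sketch (the value uses Galera's ISO-8601 duration syntax, and since wsrep_provider_options is a single string, keep any provider options you already set alongside it):

[mysqld]
wsrep_provider_options = "evs.inactive_check_period=PT1S"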

Conclusion

Each node in the cluster keeps its own local copy of how it sees the topology of the entire cluster. check_inactive is a node event that is triggered every inactive_check_period seconds to help the node update its view of the whole cluster, and ensure it is accurate. Service messages can be broadcast to the cluster informing nodes of changes to the topology. For example, if a cluster node is going down it will broadcast a service message telling each node in the cluster to remove it. The action is queued but the actual view of the cluster is updated with check_inactive. This is why it adds nodes to its local copy of inactive, suspect and delayed nodes.

If a node thinks it might be looking at stale data, it doesn’t make these decisions and waits until the next time for a fresh queue. Unfortunately, if inactive_check_period is too low, it will keep giving you warnings.

Categories: MySQL

Percona Live Europe call for papers is now open!

MySQL Performance Blog - Thu, 2016-06-02 18:13

The Percona Live Europe 2016 call for papers is now officially open! We’re looking forward to seeing you in Amsterdam this October 3-5, and to hearing you speak! Ask yourself “Do I have…”:

  • Fresh ideas?
  • Enlightening case studies?
  • Insight on best practices?
  • In-depth technical knowledge?

If the answer to any of these is “YES,” then you need to submit your proposal for a chance to speak at Percona Live Europe 2016. Speaking is a great way to broaden not only the awareness of your company with an intelligent and engaged audience of software architects, senior engineers and developers, but also your own!

The deadline to submit is July 18th, 2016.

Tracks/Topics: 

Percona Live Europe 2016 is looking for topics for Breakout Sessions, Tutorial Sessions, and Lightning Talks:

  • Breakout Session. Submissions should be detailed and clearly indicate the topic and content of your proposal for the Conference Committee. Sessions should either be 25 minutes or 50 minutes in length, including Q&A.
  • Tutorial Session. Submissions should be detailed and include an agenda for review by the Conference Committee. Tutorial sessions should present immediate and practical applications of in-depth knowledge of MySQL, NoSQL, or Data in the Cloud technologies. They should be presented at a level between a training class and a conference breakout session. Attendees are expected to have their laptops to work through detailed and potentially hands-on presentations. Tutorials will be 3 hours in length including Q&A. If you would like to submit your proposal as a full day, six-hour tutorial, please indicate this in your submission.
  • Lightning Talks. Lightning talks are five-minute presentations focusing on one key point that will be of interest to the community. Talks can be technical, lighthearted, fun or otherwise entertaining submissions. These can include new ideas, a successful project, a cautionary story, quick tip or demonstration. This session is an opportunity for ideas to get the attention they deserve. The rules for this session are easy: five minutes and only five minutes. Use this time wisely to present the pertinent message of the subject matter and have fun doing so!

This year, the conference will feature a variety of formal tracks. We are particularly interested in proposals that fit into the areas of MySQL, MongoDB, NoSQL, ODBMS and Data in the Cloud. Our conference tracks are:

  • Analytics
  • Architecture/Design
  • Big Data
  • Case Stories
  • Development
  • New and Trending Topics
  • Operations and Management
  • Scalability/Performance
  • Security

Action Items:

With the call for papers ending July 18th, 2016, the time to submit is now! Here’s what you need to submit:

1) Pull together your proposal information:

-Talk title
-Abstract
-Speaker bio, headshot, video

2) Submit your proposal here before July 18th: SUBMIT

3) After submitting, you will be given a direct link and badge to advertise to the community. Tweet, Facebook, blog – get the word out about your submission and get your followers to vote!

That’s all it takes! We’re looking forward to your submissions!

Sponsoring / Super Saver Tickets! 

Interested in sponsoring? Take advantage of early sponsorship opportunities to reach the influential Percona Live audience. Sponsorship Information

Want to attend at the cheapest rate possible? Super Saver tickets are on sale NOW! Register today.

Categories: MySQL

Why use provisioned IOPS volumes for AWS databases?

MySQL Performance Blog - Wed, 2016-06-01 22:05

In this blog, we’ll use some test results to look at the rationale for using provisioned IOPS volumes for AWS databases.

One piece of advice you often hear for running MySQL, MongoDB or other databases in the AWS EC2 environment is that you should use volumes with provisioned IOPS. This kind of makes sense on the “marketing” level, where provisioned IOPS (io1) volumes are designed for IO-intensive database workloads, while General Purpose (gp2) volumes are not. But if you go to the AWS volume type description, you will find that gp2s are shown to have pretty good IO performance. So where do all these supposed database performance problems for Amazon Elastic Block Store (EBS) volumes without provisioned IOPS come from?

Here is what I found out running experiments with a beta of Percona Monitoring and Management.

I ran a typical database instance workload, where the OLTP workload uses around 20% of the system capacity, and periodically I have a single user IO intensive batch job hitting the same system. Even if you do not have batch jobs running, your backup is likely to show this same IO pattern.

What would happen in this case if you have conventional local storage? Some queueing happens on the storage level, but as there is only one user with intensive IO, the impact is typically not very significant. What do we see from the AWS gp2 volume?

At first, reads spike to more than 1.5K IOPS, and while latency increases from the normal 1-2ms, it remains below 10ms on average. However, after a couple of minutes IOPS drop to around 500, and read latency spikes to over 100ms (note the log scale on the graph).

What is happening here? The gp2 volumes behave differently than your conventional storage by allowing IO bursts for short periods of time – after a short period of time, however, the IOs are throttled (in this case to only 500/sec). How does the throttling work? By adding delay to IO completion so that only the required IOs are completed per second, and the more concurrency we add to such throttled devices, the higher the average IO response latency is!

What does this mean from an application point of view? Let’s say you have a database transaction that requires 100 reads from the disk. If you have an average of 1ms latency, this transaction takes about 100ms reading from the disk, and will likely be seen as very good user experience. If you have an average IO latency of 100ms, the same transaction spends ten seconds reading from the disk – well above the tolerance for many users.   

As a DBA, you can see how putting an extra (small) load on the database system (such as running batch job or backup) can cause your boss to come screaming that the website is down ten minutes later.

There is another key difference between conventional local storage such as RAID or SSD, and an EBS volume. Not all local storage IO is created equal, while an EBS general purpose volume seems to inject latencies into IO operations independent of what the IO is.

Transactional log flushes are one of the most latency critical IO operations databases perform. These are very small (often just 1 page) sequential writes. RAID controllers and SSDs can handle these very quickly by only writing in memory (battery or capacitor backed up), at a fraction of the costs of other operations. This is not the case for EBS gp2: log writes come with high latency.

We can see this latency in Performance Schema graphs, where such batch jobs correlate with a huge amount of time spent writing to the InnoDB transactional log file or binary log file:

We can also see the main InnoDB thread spending up to 30% of its time flushing the log – the number is drastically lower for typical storage configuration:

Another way AWS EBS storage is different from the typical local storage is that size directly buys you performance. GP2 volumes provide 3 IOPS/GB, up to 10000 IOPS (99 percentile figure),  which means that larger storage will have higher performance – though if anything, this means you’re getting better performance from your larger production volumes than your smaller test ones.

A final note: EBS storage is essentially connected to a network, which means both slightly higher latencies and limited throughput. According to the documentation, there is 160MiB/s throughput limit per volume, which is a lot less than even inexpensive SATA SSD. SSD often can provide 500MB/sec or more, and are generally limited by SATA bus capacity.  

My takeaways from these results:

  • EBS General Purpose volumes have decent performance for light-duty workloads – if you don’t demand a lot of IOPS from your storage for prolonged periods of time. If you do, storage with provisioned IOPS is a better choice
  • Whenever you’re using Amazon or other environments with multi-tenant virtualized storage, I would highly suggest running a benchmark on how it behaves for the above scenarios (see the sketch after this list). The assumptions you have about your conventional RAID or SSD storage might not apply.
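
For example, a rough sysbench fileio run, long enough to exhaust any burst allowance, might look like the sketch below (sysbench 0.5 syntax; the file size and duration are placeholders to adapt to your instance):

# create test files larger than RAM so the page cache doesn't hide the storage
sysbench --test=fileio --file-total-size=32G prepare
# sustained random read/write with O_DIRECT for 10 minutes
sysbench --test=fileio --file-total-size=32G --file-test-mode=rndrw --file-extra-flags=direct --max-time=600 --max-requests=0 run
# remove the test files
sysbench --test=fileio --file-total-size=32G cleanup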

Want to play around with live graphs? Check out our PMM Demo, which is currently running the stated workload on Amazon EC2. You can also install the beta version to use with your own system.

 

Categories: MySQL

Troubleshooting locking issues webinar: Q & A

MySQL Performance Blog - Tue, 2016-05-31 23:58

In this blog, I will provide answers to the Q & A for the Troubleshooting locking issues webinar.

First, I want to thank you for attending the May 12 webinar. The recording and slides for the webinar are available here. Below is the list of questions that I wasn’t able to answer during the webinar, with responses:

Q: Do you have the links to those other info sources?

A: Yes, they are listed in the “More Information” slide. In the PDF, all the links are active. If you speak Russian, you can also check this presentation by Dmitry Lenev. He also did a similar presentation in English for MySQL Connect, but all the content is now gone from the official website, so the only way to find his slides in English is to search web archives.

Q: Are you going to discuss metadata locks?

A: Yes. I discuss them in slides 11-16.

Q: Why do row locking when a table-level lock is already set by InnoDB? My question is: the table-level lock is already set, you update 100 rows in that table, and InnoDB locks these 100 rows. Why? The table is already locked . . .

A: The table lock, which you saw on slide #20, is set by InnoDB only for a short time and is almost immediately released. But the transaction is not closed yet, and InnoDB still needs to protect the updated rows from modifications by other transactions. Why can’t this be done with a table-level lock only? Imagine you have a table with 1,000,000 rows, with IDs from 1 to 1,000,000 (and other fields). Now imagine you need to update the row with ID=1. With a table lock, the whole table is locked while you are performing this one update. If another connection wants to update the row with ID=202, it has to wait. With row-level locks, the two queries do not interfere with each other and can run in parallel.
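
A minimal two-session sketch of that behavior (the table and column names are made up, and id is assumed to be the primary key):

-- session 1: takes a row lock on id = 1 and holds it until COMMIT
BEGIN;
UPDATE sensors SET value = value + 1 WHERE id = 1;

-- session 2: a different row, so it proceeds immediately
BEGIN;
UPDATE sensors SET value = value + 1 WHERE id = 202;
COMMIT;

-- session 3: the same row as session 1, so this statement blocks
-- until session 1 commits or rolls back
UPDATE sensors SET value = value + 1 WHERE id = 1;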

Q: How do you avoid locks on alter, without resetting that transaction?

A: If you are using version 5.6 and up, many ALTER commands are non-locking. See the overview of online DDL in the user manual. However, if you want to use an ALTER variation that cannot be done online, you can use the utility pt-online-schema-change from Percona Toolkit. Note that it will take longer than the regular "blocking" ALTER, but it will not block your other connections.
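
As a sketch (the database, table and column names here are made up), an online schema change with Percona Toolkit looks roughly like this:

# copies the table in the background and swaps it in, without holding a long lock
pt-online-schema-change --alter "ADD COLUMN notes VARCHAR(255)" \
  D=mydb,t=mytable --execute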

Q: This is not a question, but there is a typo on the slide – it should be Intention Locks, not Intension Locks

A: Thank you! I fixed this and the wrong table name in slide #28. Please download the updated version of the slides.

Q: Why does the ALTER table operation have to wait forever? It should start once the transaction finished, but I know that the lock will remain. Why doesn’t it unlock when the transaction is finished?

A: Of course it doesn't wait forever! That was just shorthand for "waits a very long time," which can happen if you have a very busy application with many threads updating the same table, or if you don't close transactions.

Q: Is the metadata_locks table enabled by default?

A: Yes.
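
For reference, a quick way to look at current metadata locks in MySQL 5.7 (a sketch; the second query simply checks that the underlying instrument is enabled):

-- list current metadata locks
SELECT OBJECT_TYPE, OBJECT_SCHEMA, OBJECT_NAME, LOCK_TYPE, LOCK_STATUS, OWNER_THREAD_ID
FROM performance_schema.metadata_locks;

-- verify the MDL instrument is on (it is by default in 5.7)
SELECT NAME, ENABLED FROM performance_schema.setup_instruments
WHERE NAME = 'wait/lock/metadata/sql/mdl';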

Categories: MySQL

What is a big innodb_log_file_size?

MySQL Performance Blog - Tue, 2016-05-31 15:45

In this post, we’ll discuss what constitutes a big innodb_log_file_size, and how it can affect performance.

In the comments for our post on Percona Server 5.7 performance improvements, someone asked why we use innodb_log_file_size=10G, with an indication that it might be too big.

In my previous post (https://www.percona.com/blog/2016/05/17/mysql-5-7-read-write-benchmarks/), the example used innodb_log_file_size=15G. Is that too big? Let’s take a more detailed look at this.

First, let me start by rephrasing my warning: the log file size should be set as big as possible, but not bigger than necessary. A bigger log file size is better for performance, but it has a drawback (a significant one) that you need to worry about: the recovery time after a crash. You need to balance recovery time in the rare event of a crash against maximizing throughput during peak operations. This trade-off can translate to a 20x longer crash recovery process!

But how big is “big enough”? Is it 48MB (the default value), 1-2GB (which I often see in production), or 10-15GB (like we use for benchmarks)?

I wrote about how the innodb_log_file_size is related to background flushing five years ago, and I recommend this post if you are interested in details:

InnoDB Flushing: Theory and solutions

Since that time many improvements have been made both in Percona Server and MySQL, but a small innodb_log_file_size still affects the throughput.

How? Let’s review how writes happen in InnoDB. Practically all data page writes happen in the background. It seems like background writes shouldn’t affect user query performance, but it does. The more intense background writes are, the more resources are taken away from the user foreground workload. There are three big forces that rule background writes:

  1. How close checkpoint age is to the async point (again, see previous material https://www.percona.com/blog/2011/04/04/innodb-flushing-theory-and-solutions/). This is adaptive flushing.
  2. How close is innodb_max_dirty_pages_pct to the percentage of actual dirty pages.  You can see this in the LRU flushing metrics.
  3. What amount of free pages are defined by innodb_lru_scan_depth. This is also in LRU flushing metrics.

So in this equation innodb_log_file_size defines the async point, and how big checkpoint age can be.

To show a practical application of these forces, I’ve provided some chart data. I will use charts from the Percona Monitoring and Management tool and data from Percona Server 5.7.

Before jumping to graphs, let me remind you that the max checkpoint age is defined not only by innodb_log_file_size, but also by innodb_log_files_in_group (which is 2 by default). So innodb_log_file_size=2GB gives 4GB of log space, of which MySQL will use about 3.24GB (MySQL makes extra reservations to avoid a situation where we fully run out of log space).

Below are graphs from a tpcc-mysql benchmark with 1500 warehouses, which provides about 150GB of data. I used innodb_buffer_pool_size=64GB, and I made two runs:

  1. with innodb_log_file_size=2GB
  2. with innodb_log_file_size=15GB

Other details about my setup:

  • CPU: 56 logical CPU threads, Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
  • OS: Ubuntu 16.04
  • Kernel 4.4.0-21-generic
  • The storage device is Samsung SM863 SATA SSD, single device, with ext4 filesystem
  • MySQL versions: Percona Server 5.7.11
  • innodb_io_capacity=5000 / innodb_io_capacity_max=7500

On the first chart, let’s look at the max checkpoint age, current checkpoint age and amount of flushed pages per second:

. . . and also a related graph of how many pages are flushed by different forces (LRU flushing and adaptive flushing). You can receive this data by enabling innodb_monitor_enable = '%'.

From these charts, we can see that with 2GB innodb_log_file_size InnoDB is forced by adaptive flushing to flush (write) more pages, because the current checkpoint age (uncheckpointed bytes) is very close to Max Checkpoint Age. To see the checkpoint age in MySQL, you can use the innodb_metrics table and metrics recovery_log_lsn_checkpoint_age and recovery_log_max_modified_age_sync.
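
If you want to check these numbers on your own server, here is a sketch of the queries (the LIKE patterns are a loose match for the metric names mentioned above):

-- enable all InnoDB metrics counters (as used for the graphs in this post)
SET GLOBAL innodb_monitor_enable = '%';

-- checkpoint age and the related watermarks
SELECT NAME, COUNT FROM information_schema.INNODB_METRICS
WHERE NAME LIKE '%checkpoint_age%' OR NAME LIKE '%modified_age%';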

When using innodb_log_file_size=15GB, the main flushing is done via LRU flushing (to keep 5000 pages (innodb_lru_scan_depth) free per buffer pool instance). From the first graph we can see that uncheckpointed bytes never reach 12GB, so in this case using innodb_log_file_size=15GB is overkill. We might be fine with innodb_log_file_size=8GB – but we wouldn't know unless we set the innodb_log_file_size big enough. MySQL 5.7 comes with a very convenient improvement: it is now much easier to change the innodb_log_file_size, but it still requires a server restart. I wish we could change it online, like we can innodb_buffer_pool_size (I do not see technical barriers to this).
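
For reference, the change itself is just a configuration edit plus a clean restart; here is a sketch of the relevant my.cnf settings for the 8GB case discussed above:

[mysqld]
# 2 x 8GB = 16GB of redo log space; InnoDB recreates the log files
# with the new size on the next clean restart
innodb_log_file_size      = 8G
innodb_log_files_in_group = 2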

Let’s also look into the InnoDB buffer pool content:

We can see that there are more modified pages in the case with 15GB log files (which is good, as more modified pages means less work done in the background).

And the most interesting question: how does it affect throughput?

With innodb_log_file_size=2GB, the throughput is about 20% worse. With a 2GB log size, you can see that often zero transactions are processed within one second – this is bad, and says that the flushing algorithm still needs improvements in cases when the checkpoint age is close to or at the async point.

This should make a convincing case that using big innodb_log_file_size is beneficial. In this particular case, probably 8GB (with innodb_log_files_in_group=2) would be enough.

What about the recovery time? To measure this, I killed mysqld when the checkpoint age (uncheckpointed bytes) was about 10GB. It took 20 minutes to start mysqld after the crash. In another experiment with 25GB of uncheckpointed bytes, it took 45 minutes. Unfortunately, crash recovery in MySQL is still single-threaded, so it takes a long time to read and apply 10GB worth of changes (even from fast storage).

We can see that recovery is single-threaded from the CPU usage chart during recovery:

The system uses 2% of the CPU (which corresponds to a single CPU).

In many cases, crash recovery is not a huge concern. People don’t always have to wait for MySQL recovery – since even one minute of downtime can be too long, often the instance fails over to a slave (especially with async replication), or the crashed node just leaves the cluster (if you use Percona XtraDB Cluster).

I would still like to see improvements in this area. Crash recovery is the biggest showstopper for using a big innodb_log_file_size, and I think it is possible to add parallelism similar to multithreaded slaves into the crash recovery process.

You can find the raw results, scripts and configs here.

 

Categories: MySQL

Galera Error Failed to Report Last Committed (Interrupted System Call)

MySQL Performance Blog - Fri, 2016-05-27 22:00

In this blog, we’ll discuss the ramifications of the Galera Error Failed to Report Last Committed (Interrupted System Call).

I have recently seen this error with Percona XtraDB Cluster (or Galera):

[Warning] WSREP: Failed to report last committed 549684236, -4 (Interrupted system call)

It was posted in launchpad as a bug in 2013: https://bugs.launchpad.net/percona-xtradb-cluster/+bug/1434646

My colleague Przemek replied, and explained it as:

Reporting the last committed transaction is just a part of the certification index purge process. In case it fails for some reason (it occasionally does), the cert index purge may be a little delayed. But it does not mean the transaction was not applied successfully. This is a warning after all.

If we look up this error in the source code, we realize it is reusing Linux system errors. Specifically:

#define EINTR 4 /* Interrupted system call */

As there isn’t much documentation regarding this error, and internet searches did not bring up useful information, my colleague David Bennett and I delved into the source code (as we do on occasion).

If we look in the Galera source code gcs_sm.hpp we see:

289  * @retval -EINTR  - was interrupted by another thread

We also see:

317                 /* was interrupted, will be handled by someone else */

This means that the thread was interrupted, but the server will retry on another thread. As it is just a warning, it isn’t anything to be too concerned about – unless they begin to pile up (which could be a sign of concurrency issues).
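
A quick way to see whether they are piling up is simply to count them in the error log (the log path here is an assumption):

grep -c "Failed to report last committed" /var/log/mysqld.log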

The specific warning is thrown from galera_service_thd.cpp here:

58                 if (gu_unlikely(ret < 0))
59                 {
60                     log_warn << "Failed to report last committed "
61                              << data.last_committed_ << ", " << ret
62                              << " (" << strerror (-ret) << ')';
63                     // @todo: figure out what to do in this case
64                 }

This warning could be handled better so that it doesn't clutter the logs, or at least be worded less cryptically so that it doesn't alarm administrators.

Categories: MySQL

Asynchronous Query Execution with MySQL 5.7 X Plugin

MySQL Performance Blog - Fri, 2016-05-27 20:58

In this blog, we will discuss MySQL 5.7 asynchronous query execution using the X Plugin.

Overview

MySQL 5.7 supports X Plugin / X Protocol, which allows (if the library supports it) asynchronous query execution. In 2014, I published a blog post on how to increase slow query performance with parallel query execution. There, I created a prototype in the bash shell. Here, I've tried a similar idea with NodeJS + the mysqlx library (which uses the MySQL X Plugin).

TL;DR version: By using the MySQL X Plugin with NodeJS I was able to increase query performance 10x (some query rewrite required).

X Protocol and NodeJS

Here are the steps required:

  1. First, we will need to enable X Plugin in MySQL 5.7.12+, which will use a different port (33060 by default); see the one-line install sketch right after this list.
  2. Second, download and install NodeJS (>4.2) and mysql-connector-nodejs-1.0.2.tar.gz (follow Getting Started with Connector/Node.JS guide).
    # node --version
    v4.4.4
    # wget https://dev.mysql.com/get/Downloads/Connector-Nodejs/mysql-connector-nodejs-1.0.2.tar.gz
    # npm install mysql-connector-nodejs-1.0.2.tar.gz
    Please note: on older systems, you will probably need to upgrade the nodejs version. Follow the Installing Node.js via package manager guide.
  3. All set! Now we can use the asynchronous queries feature.
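
For step 1, enabling the plugin is a one-liner (a sketch for Linux; the shared object name differs on Windows):

mysql -u root -p -e "INSTALL PLUGIN mysqlx SONAME 'mysqlx.so';"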

Test data 

I’m using the same Wikipedia Page Counts dataset (wikistats) I’ve used for my Apache Spark and MySQL example. Let’s imagine we want to compare the popularity of MySQL versus PostgeSQL in January 2008 (comparing the total page views). Here are the sample queries:

mysql> select sum(tot_visits) from wikistats_by_day_spark where url like '%mysql%';
mysql> select sum(tot_visits) from wikistats_by_day_spark where url like '%postgresql%';

The table only holds data for the English Wikipedia for January 2008, but still has ~200M rows and is ~16GB in size. Both queries run for ~5 minutes each, and utilize only one CPU core (one connection = one CPU core). The box has 24 CPU cores, Intel(R) Xeon(R) CPU L5639 @ 2.13GHz. Can we run the query in parallel, utilizing all cores?

That is possible now with NodeJS and X Plugin, but it requires some preparation:

  1. Partition the table using hash, 24 partitions:
    CREATE TABLE `wikistats_by_day_spark_part` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `mydate` date NOT NULL,
      `url` text,
      `cnt` bigint(20) NOT NULL,
      `tot_visits` bigint(20) DEFAULT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=239863472 DEFAULT CHARSET=latin1
    /*!50100 PARTITION BY HASH (id) PARTITIONS 24 */
  2. Rewrite the query running one connection (= one thread) per each partition, choosing its own partition for each thread:
    select sum(tot_visits) from wikistats_by_day_spark_part partition (p<N>) where url like '%mysql%';
  3. Wrap it up inside the NodeJS Callback functions / Promises.

The code

var mysqlx = require('mysqlx');
var cs_pre = { host: 'localhost', port: 33060, dbUser: 'root', dbPassword: 'mysql' };
var cs = { host: 'localhost', port: 33060, dbUser: 'root', dbPassword: 'mysql' };
var partitions = [];
var res = [];
var total = 0;

mysqlx.getNodeSession( cs_pre ).then(session_pre => {
  var sql = "select partition_name from information_schema.partitions where table_name = 'wikistats_by_day_spark_part' and table_schema = 'wikistats' ";
  session_pre.executeSql(sql)
    .execute(function (row) {
      partitions.push(row);
    }).catch(err => {
      console.log(err);
    })
    .then( function () {
      partitions.forEach(function(p) {
        mysqlx.getNodeSession( cs ).then(session => {
          var sql = "select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(" + p + ") where url like '%mysql%';";
          console.log("Started SQL for partiton: " + p);
          return Promise.all([
            session.executeSql(sql)
              .execute(function (row) {
                console.log(p + ":" + row);
                res.push(row);
                total = Number(total) + Number(row);
              }).catch(err => {
                console.log(err);
              }),
            session.close()
          ]);
        }).catch(err => {
          console.log(err + "partition: " + p);
        }).then(function() {
          // All done
          if (res.length == partitions.length) {
            console.log("All done! Total: " + total);
            // can now sort "res" array if needed and display
          }
        });
      });
    });
  session_pre.close();
});
console.log("Starting...");

The explanation

The idea here is rather simple:

  1. Find all the partitions for the table by using “select partition_name from information_schema.partitions”
  2. For each partition, run the query in parallel: create a connection, run the query with a specific partition name, define the callback function, then close the connection.
  3. As the callback function is used, the code will not be blocked, but rather proceed to the next iteration. When the query is finished, the callback function will be executed.
  4. Inside the callback function, I’m saving the result into an array and also calculating the total (actually I only need a total in this example).
    .execute(function (row) {
      console.log(p + ":" + row);
      res.push(row);
      total = Number(total) + Number(row);
    ...

Asynchronous Salad: tomacucumtoes,bersmayonn,aise *

This may blow your mind: because everything is running asynchronously, the callback functions will return when ready. Here is the result of the above script:

$ time node async_wikistats.js
Starting...
Started SQL for partiton: p0
Started SQL for partiton: p1
Started SQL for partiton: p2
Started SQL for partiton: p3
Started SQL for partiton: p4
Started SQL for partiton: p5
Started SQL for partiton: p7
Started SQL for partiton: p8
Started SQL for partiton: p6
Started SQL for partiton: p9
Started SQL for partiton: p10
Started SQL for partiton: p12
Started SQL for partiton: p13
Started SQL for partiton: p11
Started SQL for partiton: p14
Started SQL for partiton: p15
Started SQL for partiton: p16
Started SQL for partiton: p17
Started SQL for partiton: p18
Started SQL for partiton: p19
Started SQL for partiton: p20
Started SQL for partiton: p21
Started SQL for partiton: p22
Started SQL for partiton: p23

… here the script will wait for the async calls to return, and they will return when ready – the order is not defined.

Meanwhile, we can watch MySQL processlist:

+------+------+-----------------+-----------+---------+-------+--------------+--------------------------------------------------------------------------------------------------------------------+
| Id   | User | Host            | db        | Command | Time  | State        | Info                                                                                                               |
+------+------+-----------------+-----------+---------+-------+--------------+--------------------------------------------------------------------------------------------------------------------+
| 186  | root | localhost:44750 | NULL      | Sleep   | 21391 | cleaning up  | PLUGIN                                                                                                             |
| 2290 | root | localhost       | wikistats | Sleep   | 1417  |              | NULL                                                                                                               |
| 2510 | root | localhost:41737 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p0) where url like '%mysql%'  |
| 2511 | root | localhost:41738 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p1) where url like '%mysql%'  |
| 2512 | root | localhost:41739 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p2) where url like '%mysql%'  |
| 2513 | root | localhost:41741 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p4) where url like '%mysql%'  |
| 2514 | root | localhost:41740 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p3) where url like '%mysql%'  |
| 2515 | root | localhost:41742 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p5) where url like '%mysql%'  |
| 2516 | root | localhost:41743 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p6) where url like '%mysql%'  |
| 2517 | root | localhost:41744 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p7) where url like '%mysql%'  |
| 2518 | root | localhost:41745 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p8) where url like '%mysql%'  |
| 2519 | root | localhost:41746 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p9) where url like '%mysql%'  |
| 2520 | root | localhost:41747 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p10) where url like '%mysql%' |
| 2521 | root | localhost:41748 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p11) where url like '%mysql%' |
| 2522 | root | localhost:41749 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p12) where url like '%mysql%' |
| 2523 | root | localhost:41750 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p13) where url like '%mysql%' |
| 2524 | root | localhost:41751 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p14) where url like '%mysql%' |
| 2525 | root | localhost:41752 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p15) where url like '%mysql%' |
| 2526 | root | localhost:41753 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p16) where url like '%mysql%' |
| 2527 | root | localhost:41754 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p17) where url like '%mysql%' |
| 2528 | root | localhost:41755 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p18) where url like '%mysql%' |
| 2529 | root | localhost:41756 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p19) where url like '%mysql%' |
| 2530 | root | localhost:41757 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p20) where url like '%mysql%' |
| 2531 | root | localhost:41758 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p21) where url like '%mysql%' |
| 2532 | root | localhost:41759 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p22) where url like '%mysql%' |
| 2533 | root | localhost:41760 | NULL      | Query   | 2     | Sending data | PLUGIN: select sum(tot_visits) from wikistats.wikistats_by_day_spark_part partition(p23) where url like '%mysql%' |
| 2534 | root | localhost       | NULL      | Query   | 0     | starting     | show full processlist                                                                                              |
+------+------+-----------------+-----------+---------+-------+--------------+--------------------------------------------------------------------------------------------------------------------+

And CPU utilization:

Tasks: 41 total, 1 running, 33 sleeping, 7 stopped, 0 zombie
%Cpu0  : 91.9 us, 1.7 sy, 0.0 ni, 6.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1  : 97.3 us, 2.7 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2  : 97.0 us, 3.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3  : 97.7 us, 2.3 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu4  : 95.7 us, 2.7 sy, 0.0 ni, 1.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu5  : 98.3 us, 1.7 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu6  : 98.3 us, 1.7 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu7  : 97.7 us, 2.3 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu8  : 96.7 us, 3.0 sy, 0.0 ni, 0.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu9  : 98.3 us, 1.7 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu10 : 95.7 us, 4.3 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu11 : 97.7 us, 2.3 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu12 : 98.0 us, 2.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu13 : 98.0 us, 1.7 sy, 0.0 ni, 0.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu14 : 97.7 us, 2.3 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu15 : 97.3 us, 2.7 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu16 : 98.0 us, 2.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu17 :100.0 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu18 : 97.3 us, 2.7 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu19 : 98.7 us, 1.3 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu20 : 99.3 us, 0.7 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu21 : 97.3 us, 2.3 sy, 0.0 ni, 0.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu22 : 97.0 us, 3.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu23 : 96.0 us, 4.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
...
  PID USER  PR NI    VIRT    RES  SHR S %CPU %MEM     TIME+ COMMAND
18901 mysql 20  0 25.843g 0.017t 7808 S 2386 37.0 295:34.05 mysqld

Now, here is our “salad”:

p1:2499
p23:2366
p2:2297
p0:4735
p12:12349
p14:1412
p3:2045
p16:4157
p20:3160
p18:8717
p17:2967
p13:4519
p15:5462
p10:1312
p5:2815
p7:4644
p9:766
p4:3218
p6:4175
p21:2958
p8:929
p19:4182
p22:3231
p11:4020

As we can see, all partitions are in random order. If needed, we can even sort the result array (which isn’t needed for this example as we only care about the total). Finally our result and timing:

All done! Total: 88935

real    0m30.668s
user    0m0.256s
sys     0m0.028s

Timing and Results

  • Original query, single thread: 5 minutes
  • Modified query, 24 threads in Node JS: 30 seconds
  • Performance increase: 10x

If you are interested in the original question (MySQL versus PostgreSQL, Jan 2008):

  • MySQL, total visits: 88935
  • PostgreSQL total visits: 17753

Further Reading:

PS: Original Asynchronous Salad Joke, by Vlad @Crazy_Owl (in Russian)

Categories: MySQL

Join me for the Community Open House for MongoDB

MySQL Performance Blog - Fri, 2016-05-27 18:36


If you can make it to Manhattan Thursday, June 30, 2016, please join me at the Community Open House for MongoDB. The Community Open House for MongoDB, held from 9 AM to 6 PM at the Park Central Hotel, is a free and open event that will feature technical presentations and sessions from key members of the MongoDB open source community. It’s the day after MongoDB World ends, so many of you will already be in town and ready to talk MongoDB – stay one more day to get an even bigger and broader perspective on your MongoDB needs!

The open source community is a diverse and powerful collection of companies, organizations and individuals that have helped to literally change the world. Percona is proud to call itself a member of the open source community, and we strongly feel that upholding the principles of the community is a key to our success. These principles include an open dialog, an open mind, and a zeal for cooperative interaction. Since Percona and ObjectRocket weren't allowed to participate in MongoDB World, we felt this event would help foster creative and productive dialogue within the community. Together, we can build amazing things!

A reception will be held from 4:30 PM to 6:00 PM, featuring plenty of food, drink and fun.

Event Speakers:

The Community Open House for MongoDB features expert speakers from Percona, ObjectRocket, Facebook, Appboy, Severalnines and The Washington Post. Here’s a partial lineup – more will be announced soon!

  • Open Source Monitoring for MongoDB – Peter Zaitsev, Percona
  • Real-world Operational Concerns for Scaling MongoDB – Kim Wilkins, ObjectRocket
  • Doing More With Less With RocksDB and MongoRocks at Facebook – Islam AbdelRahman, Facebook
  • Tracking Billions of App Installs with MongoDB – Jon Hyman, Appboy
  • WiredTiger Scalability and Performance Optimization – Vadim Tkachenko, Percona
  • Using MongoDB and NodeJS to Build Better Forms at the Washington Post – Kat Styons, The Washington Post
  • Real-world Operational Concerns for Scaling MongoDB – Tim Banks, ObjectRocket
  • How to Automate, Monitor and Manage Existing MongoDB Servers – Art van Scheppingen, Severalnines
  • Open Source Encryption for MongoDB – Tim Banks, ObjectRocket
  • My First Moments with MongoDB – Colin Charles
  • Open Source Backups for MongoDB – David Murphy, Percona
  • MongoDB Chunks – Distribution, Splitting & Merging – Jason Terpko, ObjectRocket
  • MongoDB for MySQL Developers and DBAs – Alexander Rubin, Percona

This event is free of charge and open to all, but we do ask you to register in advance so we can save you a seat.

I hope to see you Thursday in NYC!

Categories: MySQL

Monitoring made easy with Percona App for Grafana

MySQL Performance Blog - Thu, 2016-05-26 17:57

Percona has released a new Percona App for Grafana!

Are you using Grafana 3.x with Prometheus' time-series database? Now there is a "Percona App" available on Grafana.net! The app provides a set of dashboards for MySQL performance and system monitoring with the Prometheus data source, and makes it easy for users to install them. The dashboards rely on the alias label in the Prometheus config and depend on a small patch applied to Grafana.
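
For reference, the alias label is just an extra target label in prometheus.yml; a minimal sketch (the job name, host and exporter port are placeholders for your own setup):

scrape_configs:
  - job_name: mysql
    static_configs:
      - targets: ['db1.example.com:9104']
        labels:
          alias: db1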

The dashboards in the app are:

  • Cross Server Graphs
  • Disk Performance
  • Disk Space
  • Galera Graphs
  • MySQL InnoDB Metrics
  • MySQL MyISAM Metrics
  • MySQL Overview
  • MySQL Performance Schema
  • MySQL Query Response Time
  • MySQL Replication
  • MySQL Table Statistics
  • MySQL User Statistics
  • Prometheus
  • Summary Dashboard
  • System Overview
  • TokuDB Graphs
  • Trends Dashboard

The Grafana and Prometheus teams are doing a fantastic job of bringing monitoring and time-series to the next level. They are making collecting and graphing metrics simple and more usable.

See my previous blog post for step-by-step instructions on how to install Grafana and Prometheus. Get the Percona App for Grafana today!

Categories: MySQL

AWS Aurora Benchmarking part 2

MySQL Performance Blog - Thu, 2016-05-26 12:45

Some time ago, I published the article on AWS Aurora Benchmarking (AWS Aurora Benchmarking – Blast or Splash?), in which I analyzed the behavior of different solutions using synchronous replication in AWS environment. This blog follows up with some of the comments and suggestions I received regarding that post from the community and Amazon engineers.

I decided to perform another round of tests, keeping in mind comments and suggestions received.

I presented some of the results during the Percona conference in Santa Clara last April 2016. The following is a transposition of that presentation, with more details.

Not interested in the preliminary descriptions? Go to the results section

Why new tests?

A very good question, with an easy answer.

Aurora is a product that is still under development and refinement: six months of development could present major changes in performance. Not only that, but the initial tests focused on entry-level solutions, meaning I was analyzing the kind of users that are currently starting their business and looking for a flexible solution that allows them to save money and scale.

This time, I put the focus on enterprise solutions by analyzing what an already well-established company would get when looking for a decent scalable solution.

These are two different scenarios.

Why so many (different) tests?

I used many different benchmarking tools, and I am still planning to run others. Why so? Why not simply use one of them?

Again, a simple answer. I used different tools because in some cases they provide me with a different way of accessing and using data. I also do not trust benchmarking tools, not even the ones I developed. I wanted to test the same thing using different tools and compare the results. Only if I see a common pattern do I consider the test valid. Personally, I tend to discard any test that is not consistent, or any analysis performed using a single benchmarking tool. In my opinion, being lazy is not an option when doing these kinds of exercises.

About the tests

It was difficult to compare apples to apples here. And I think that is the main point to keep in mind.

Aurora is not a standard RDS solution, like we are used to. Aurora looks like MySQL, smells like MySQL, but is not vanilla MySQL. To achieve what they have, the engineers had to change many parts. The more you dig in, the more you realize there are significant differences.

Because of that, I had to focus more on identifying what each solution can do and compare the solutions against expectations, rather than comparing the numbers.

I was more interested to see what happens if:

  • I have a burst of connections, and my application goes from 4K to 40K connections. Will it crash? Will it slow down?
  • How long should I wait if a node fails?
  • What should I not have in my schema design, to prevent bottlenecks?

Those are relevant questions in my opinion, more so than discovering that solution A has 3000 rows written/sec, and solution B has 3100. Or that I might (might) have some additional page rotation, file -> memory-> flushes because the amount of memory differs.

That is valuable information, for sure, but less valuable than having a decent understanding of which platform will help my business grow and remain stable.

What is the right tool for the job? This is the question I am addressing.

Tests run

I ran three main kinds of tests:

  • Performance and load stress
  • High availability failover
  • Response time (latency) from the application point of view
Performance and load stress

These tests were the most extensive and demanding.

I analyzed the capacity to serve the load under different conditions, from a light load up to full utilization, and some degree of resource saturation.

  • The first set of tests were to evaluate a simple load on a single table, causing the table to become a hotspot and showing how the platform would manage the increasing contention.
  • The second set of tests were to perform a similar load, but distributing it across multiple tables and batching the operations. Parallelization, contention, scalability and distributed hotspots were in the picture.

The two above focused on write operations only, and were done using different tools (comparing the results as they were complementary).

  • Third set of tests, using my own stress tool, were focused on R/W oriented usage. The tests were executed against multiple tables, performing CRUD actions, using simple and batch insert, reads by PK, index, by range, IN and exact match conditions.
  • The fourth set of tests were performed using a TPC-C like load (OLTP).
  • The fifth set of tests were using sysbench in OLTP mode, with 250 tables.

The scope of the last three sets of tests was to identify how the platforms would manage the load, considering the following:

  • Read and write contention on the same tables
  • High level of parallelism (from the application)
  • Possible hot-spots (TPCC district)
  • Increasing utilization (memory, threads, IO)
  • Saturation (connections)

Finally, all tests were run with fully utilized BufferPool.

The machines

Small boxes (first round of tests):

EIP = 1
VPC = 1
ELB = 1
Subnets = 4 (1 public, 3 private)
HAProxy = 6
MHA Monitor (micro ec2) = 1
NAT Instance (EC2) = 1 (hosting EIP)
DB Instances (EC2) = 3 (m4.xlarge) 16GB
Application Instances (EC2) = 6 (4)
EBS SSD 3000 PIOS
Aurora RDS node = 3 (db.r3.xlarge) 30GB

Large boxes (latest tests):

EIP = 1
VPC = 1
ELB = 1
Subnets = 4 (1 public, 3 private)
HAProxy = 4
MHA Monitor (micro ec2) = 1
NAT Instance (EC2) = 1 (hosting EIP)
DB Instances (EC2) = 3 (c3.8xlarge) 60GB
Application Instances (EC2) = 4
EBS SSD 5000 PIOS
Aurora RDS node = 3 (db.r3.8xlarge) 244GB

A note

It was pointed out to me that I deliberately chose to use an EC2 solution for Percona XtraDB Cluster with less memory than the one available in Aurora. This is true, and we must take that into consideration. The reason for this is that the only EC2 solution matching the memory of a db.r3.8xlarge is the d2.8xlarge.

I did try it, but the level of scalability I got (from the CPU point of view) was less efficient than the one available with c3.8xlarge. I decided to prefer CPU resources to memory, especially because I was going to test concurrency and parallelism in conjunction with the load increase.

From the result, I feel confident that I chose correctly – but I am open to comment.

The layout

This is what the setup looks like:

Where you read Java, those are the application nodes running the different test applications.

Two words about Aurora first

Aurora has a few key concepts that we must have clearly in mind, especially how it manages the writes across replicas, and how connections are implemented.

The IO activity

To replicate the information across the different storage nodes, Aurora only replicates FRM files and data coming from IB_LOGS. This is a quite significant advantage over other forms of replication, given the limited number of bytes that travel over the network (even though they are replicated six times).

Another significant advantage is that Aurora does not use a double write buffer, which is obviously another blast (see the recent optimization in Percona Server https://www.percona.com/blog/2016/05/09/percona-server-5-7-parallel-doublewrite/ ).

In other words, writes in Aurora are organized by filling its commit queue and pushing the changes as group commit to the storage.

In some presentations, you might have seen that all steps are asynchronous. But it is important to underline that a commit is acknowledged by Aurora when at least two availability zones (AZ) have received and written the incoming data related to that commit. Written here means received in the storage node's incoming queue, with a quorum of four out of six nodes.

This means that no matter what, data has to travel over the network to reach the final destination, and ACK signals have to come back before Aurora returns the ACK to the commit operation. The network is in the same region, but it is still an unknown quantity for performance. No wonder we could see some latency at this stage!

As you can see, what I am reporting is also confirmed in the image below (and in the observations). The point is that the impact of steps 1 – 2 is not obviously clear.

Thread pooling

Aurora also uses thread pooling – a lot! That will become very clear later; the more the work is based on parallelism, the more efficient thread pooling seems to be.

In most cases we are used to seeing CPUs on database servers not fully utilized, unless there is some heavy ordering operation or a bad query. That behavior is also (though not only) a direct consequence of the connection-to-thread model, which implies periods of latency and standby. In Aurora, incoming connections do not follow the same model. Instead, the load of the incoming connections is redistributed to a pool of threads, optimizing the latency period and resulting in higher CPU utilization. Which is what you want from your resource: to be utilized, not waiting for something else to do its job.

 

The results

Without wasting more electronic ink, let's see what comes out of this round of tests (not the final one, by the way). To simplify the results, I will also report the graphs from the first set of tests, but will focus on the latest.

Small Boxes = SB, Large Boxes = LB.

First Test: IIBench

As declared previously, my scope was to verify how the two platforms reacted to a simple insert-focused load on a basic single table. The buffer pool was saturated before running the test.

SB

LB

As we can see, in the presence of a hotspot the solution using Percona XtraDB Cluster outperformed Aurora in both cases. What is notable, though, is that while XtraDB Cluster remained at approximately the same time/performance, Aurora significantly reduced the time taken. This shows that Aurora was taking advantage of the more powerful platform, while XtraDB Cluster was not able to.

Analyzing the details further, we notice that Aurora performs better atomically: it was able to manage more writes/second, as well as more rows and pages managed. But it was inconsistent: Aurora had performance hiccups at regular intervals, so the final result was that it took more time to process the whole workload.

I was not able to dig too deeply, given that some metrics are not fully available in Aurora. As such, I had to rely fully on the Aurora engineers, who mentioned to me that hot-spot contention was a possible issue.

Aurora Handler calls:

XtraDB Cluster Handlers:

The execution in XtraDB Cluster showed fewer calls but constant performance, while Aurora had hiccups.

Aurora page activity write:

XtraDB Cluster page activity write:

The trend shown by the handlers stayed consistent in the page management and rows insert, as expected.

Second Test: Application Ingest

As mentioned, in this test many threads from different application servers inserted data in batches of 50 statements against multiple tables.

The results coming from this test are quite favorable to Aurora, as we can see starting from the time taken to complete the same workload:

LB

SB

With the small boxes, the situation was inverted.

But here is where the interesting part starts.

Aurora can manage significantly higher numbers of rows, as the picture below shows:

The results are also constant, and don’t decrease significantly like the inserts with XtraDB Cluster.

The number of handler commits, however, is significantly lower.

Once more they stay the same with the load increase, without impacting performance.

Reviewing all handler calls, we get our first surprise.

XtraDB Cluster handler calls:

Aurora handler calls:

The gap/drop existing in the two graphs are the different tests (with an increasing number of threads).

Two things to notice here: the first is that XtraDB Cluster decreases in performance while processing the load, while Aurora does not. The second (you need to zoom the image) is that the number of commits fluctuates in XtraDB Cluster, while it stays fixed in Aurora.

An even bigger surprise comes up when reviewing the connections graphs.

As expected, XtraDB Cluster has all my connections open, and the number of threads running is quite close to the number of connected threads.

Both of them follow the increasing number of connected threads.

But this is not the case in Aurora.

Also, while my applications are trying to open ~800 threads, the Aurora node sees only a part of them, and the number of running threads is fixed at 32.

The important things to consider here are that a) my applications don’t connect directly to the Aurora instance, but to a connector (MariaDB), and b) that Aurora, in this case, caps the number of running threads to the number of CPU available on the instance (here 32).

Given that, I expected to have worse performance (but I don’t). The fact that Aurora uses one thread for multiple connections seems to be working quite efficiently.

The number of rows inserted is also consistent with the handler calls, and has better performance than XtraDB Cluster.

Aurora rows inserted:

XtraDB Cluster rows inserted

Again we have the same trend, only, this time, Aurora performs better than XtraDB Cluster.

Third Test: OLTP Application

When run on the small boxes, this test saw XtraDB Cluster performing much better than Aurora. The time taken by Aurora was ~3 times the time taken by XtraDB Cluster.

With a large box, I had the inverse result: Aurora outperforms XtraDB Cluster by 2 to 7 times.

Analyzing the number of commands executed with the increasing workload, we can see how XtraDB Cluster can perform better than Aurora with a workload of 128 threads, but starts to have worse performance as the load increases.

On the other hand, Aurora manages the read/write load without significant performance loss, which includes being able to increase the number of commits/sec.

Reviewing the handler calls, we see that the handler commit calls are significantly less in Aurora (as already noticed in the ingest tests).

Another thing to note is that the number of calls for XtraDB Cluster is significantly higher and not scaling, while Aurora has a nice scaling trend.

Fourth Test: TPCC-mysql

The TPCC test is mainly to test OLTP traffic, with the note that some tables (like district) might become a hotspot. The tests I ran were executed against 400 warehouses, and used 128 threads maximum for the small box and 2048 threads for the large box.
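
For context, a tpcc-mysql run at the small-box concurrency could be launched roughly like this (the host, credentials, warm-up and duration are placeholders, not the exact values used):

./tpcc_start -h db-host -d tpcc -u tpcc -p tpccpass -w 400 -c 128 -r 300 -l 3600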

During this test, I hit one of the Aurora limitations and I escalated it to the Aurora engineers (who are aware of the problem).

Small boxes:

In the case of the small boxes, there is nothing to say: XtraDB Cluster manages the load more efficiently. Its trend is not optimal, having significant fluctuation, but Aurora is just not able to keep up.

Large boxes:

 

The use of large boxes presents a different and more complex scenario. I would like to say that Aurora performs better.

This is true for two of the three tests, and Aurora was also performing better on the third until it got stuck by an internal limitation. Then its performance just collapsed.

With a more in-depth investigation, I noticed that under the hood Aurora was not performing as well as it appeared. This comes out quite clearly by looking at a comparison between the graphs covering Comm_ execution, open files, handlers and InnoDB row lock time.

In all of them it is evident how XtraDB Cluster keeps serving the workload with consistent behavior, while Aurora fails on the second test (512 threads), not just on the third with 2048 threads.

Aurora:

XtraDB Cluster:

It is clear that Aurora did better during the test with 256 threads, going over 450K com_select calls served (in a 10-second interval), compared with XtraDB Cluster, which was not able to go over 350K.

But in the following tests, while XtraDB Cluster was able to keep going (with decreasing performance), Aurora started to struggle with very inconsistent behavior.

This was also confirmed by the open files graph.

Aurora:

XtraDB Cluster:

The graphs show the instances of files opened during the test, not the ones already open. They reflect the Open_files metric: "The number of files that are open. This count includes regular files opened by the server. It does not include other types of files such as sockets or pipes. Also, the count does not include files that storage engines open using their own internal functions rather than asking the server level to do so."

I was quite surprised by the number of files open by Aurora.

Handlers reflected the same behavior, as well.

Aurora:

XtraDB Cluster:

Perfectly in line with the com trend.

So what was increasing in reverse?

Aurora:

XtraDB Cluster:

As you can see from the above, the exact same workload generated increasing row lock time, from quite low in the test with 256 threads up to crazy high values with 2048 threads.

As mentioned, we know that TPCC has a couple of tables that act as hotspots, and we already saw with IIbench how Aurora is not working efficiently in that case.

I also was getting a lot of 188 errors during the test. This is an Aurora internal error. When I reported it, I was told they know about it, and they are planning to work on it.

I hope they do soon, because if this issue is solved it is very likely that Aurora will not only be able to manage the tested workload, but exceed it by far.

I am saying this because, even with the identified issues, Aurora was able to keep going and deliver a more than decent response time during the second test (with 512 threads).

Fifth Test: Sysbench

I added the sysbench tests to test scalability, and to see what happens when the system reaches a saturation point. These tests brought up some limitations existing in the Aurora solution, related more to the connector than to the Aurora engine itself.
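
For context, a single concurrency step of such a run might look roughly like this with sysbench 0.5 (the endpoint, credentials, table size and duration are placeholders; the test used 250 tables as noted earlier):

sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
  --mysql-host=cluster-endpoint --mysql-user=sbtest --mysql-password=secret \
  --oltp-tables-count=250 --oltp-table-size=1000000 \
  --num-threads=4096 --max-time=600 --max-requests=0 run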

Aurora has a limit of 16k connections. I wanted to see what happens if I got to saturation point or close to it. It doesn’t matter if this is a ridiculously high number or not.

What happened was that Aurora managed traffic up to 4K. The closer I got to the limit, however, the more connectivity issues I had. In the end, I had to run the tests with 8K, 12K and 20K threads pointing directly to the Aurora instance, bypassing the connector, which was not able to serve the traffic. After that, I was able to hit up to ~15,500 threads (but with very inconsistent performance). I am taking the previous level of 12K threads as the limit of a meaningful test.

XtraDB Cluster was able to scale up to 16K no problem.

What is also notable here is that Aurora was able to manage the workload more efficiently regarding transaction handling (i.e., transactions executed and latency).

The number of transactions executed by Aurora was ~three times the one executed by XtraDB Cluster.

Regarding latency, Aurora showed less latency than XtraDB Cluster.

Internally, Aurora and XtraDB Cluster operations were once again different regarding how the workload was handled. The most divergent result was the handler calls:

Commit calls in Aurora were a fraction of the calls in XtraDB Cluster, while the number of rollbacks was higher.

The read calls had an even more divergent behavior, with XtraDB Cluster performing a higher number of read_keys, while Aurora was having a very limited number of them. Read_rnd are very high in XtraDB Cluster, but totally absent in Aurora (note that in Aurora, read_rnds are reported but seem not to increase). On the other hand, Aurora reported a high number of read_rnd_next, while XtraDB Cluster has none.

HA availability Fail-over time

Both solutions:

In this test, the fail-over time for the solution using Galera and HAProxy was more efficient, for both a limited and a mid-level load. One assumption is that, given Aurora has to verify both the status of the data transmitted and its consistency across the six data store nodes in every case, the process is not as fast as it could be.

It could also be that the cluster connector is not as efficient as it should be in redirecting the traffic from one node to another. It would be a very interesting exercise to replace it with some other custom solution.

Note that I was performing the tests following the Amazon recommendation to use the following to simulate a real crash:

ALTER SYSTEM CRASH [INSTANCE|NODE]

As such, I was not doing anything strange or out of the ordinary.

It is worth mentioning that of the eight seconds taken by MySQL/Galera to perform the failover, six were due to the HAProxy settings (which had a 3000 ms interval and two loops in the settings before executing failover).
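
To make that concrete, the six seconds come from settings of this kind in haproxy.cfg (a sketch with made-up addresses; checks every 3000 ms and fall 2 mean two failed checks, about six seconds, before a node is marked down):

backend galera
    server node1 10.0.0.1:3306 check inter 3000 rise 2 fall 2
    server node2 10.0.0.2:3306 check inter 3000 rise 2 fall 2 backup
    server node3 10.0.0.3:3306 check inter 3000 rise 2 fall 2 backup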

Execution latency

The purpose of these tests was to identify the latency between the moment the application sends the request and the moment MySQL/Aurora takes charge of the request. The expectation is that the busier the database, the higher the latency.

For this test, I reported both results: the one coming from the old tests with the small box, and the new one with the large box.

Small boxes:

Large boxes:

It is clear from the graphs that the two tests report different scenarios. In the first, Galera was able to manage the load more efficiently and serve requests with lower latency. For the new tests, I used a higher number of threads than for the small box. Nevertheless, in the second test the CPU utilization and the number of running threads lead me to think that Aurora was finally able to utilize resources more efficiently, resulting in lower latency.

The latency jumped up again when the number of connections rose above 12K, but that was expected given previous tests results.

Conclusions High Availability

The two platforms were able to manage the failover operation in a limited time frame (below 1 minute). Nevertheless, MySQL/Galera was shown to be more efficient and consistent. This result is a direct consequence of synchronous replication, which by design prevents MySQL/Galera from allowing an active node to fall behind.

In my opinion, the replication method used in Aurora is efficient, and given that data is shared across the read replicas, fail-over should happen faster.

The tests suffered because of the connector, and I have the feeling that having another solution in place may bring some surprises (actually, I would like to test that as well).

Performance

In this run of tests, Aurora was able to invert the results I had in the first test with the small boxes. In almost all cases, Aurora performed as well as or better than XtraDB Cluster. There are still cases where Aurora is penalized, and those are the ones where hotspots are present. The contention in Aurora is killing performance and raises errors (188). But I hope we will see significant evolution soon.

General Comments on Aurora

The product is evolving quickly, and benchmark results may become obsolete in a very short time (this is why it is important to have repeatable and comparable tests). From my point of view, in this set of tests Aurora clearly shows where it is a better fit: higher-end levels, where high availability and CPU power are the focus (not cost concerns).

There is no reason to use Aurora in small-mid boxes: the platform is not going to be as efficient as a standard solution like XtraDB Cluster. But if cost is not an issue, and the applications require a lot of parallelism, Aurora on db.r3.8xlarge is a good solution.

I still see space for improvements (like for cluster connectors, or the time taken to restart a cluster after a full stop, or contention reduction). But I am also confident that the work led by the development team will fix most of my concerns (and more) soon.

Final note: it would be nice to have the code open source, so that the community could contribute (but I understand the business reasons not to).

About Cost

I don’t think it is this the right place to mention the cost of each solution (especially because each need is different).

As such, I am not reporting any specific numbers. You can, however, follow the links below and do the necessary math:

Aurora cost calculator

AWS cost calculator

 

Categories: MySQL