
Performance Schema: Measure Multi-Threaded Slave Activity

MySQL Performance Blog - Fri, 2015-12-18 23:30

In many types of database workloads, using a multi-threaded slave from 5.6+ helps improve replication performance. I’ve had a number of users enable this feature, but I have not seen anyone ask how each thread is performing. Here’s a quick way, with Performance Schema, to measure how much multi-threaded slave activity each thread is handling (after you have already configured MTS on your slave, of course).
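If you want to confirm that MTS is actually in effect before you start measuring, a quick check of the relevant variable is enough (a minimal sketch, assuming MySQL 5.6 or later):

SHOW GLOBAL VARIABLES LIKE 'slave_parallel_workers';
-- A value greater than 0 means multiple slave worker threads are applying events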

First, we need to enable the statements instruments:

slave1> UPDATE setup_consumers SET ENABLED = 'YES' WHERE NAME LIKE 'events_statements_%';
Query OK, 2 rows affected (0.00 sec)
Rows matched: 3  Changed: 2  Warnings: 0

Next, let’s find the THREAD_ID for our slave workers:

slave1> SELECT THREAD_ID, NAME FROM threads WHERE NAME LIKE '%worker';
+-----------+-------------------------+
| THREAD_ID | NAME                    |
+-----------+-------------------------+
|        28 | thread/sql/slave_worker |
|        29 | thread/sql/slave_worker |
|        30 | thread/sql/slave_worker |
|        31 | thread/sql/slave_worker |
+-----------+-------------------------+
4 rows in set (0.00 sec)
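On MySQL 5.7 you can also pull the worker THREAD_IDs from the replication tables in performance_schema instead of matching on the thread name (a sketch; this table does not exist in 5.6):

SELECT THREAD_ID
  FROM performance_schema.replication_applier_status_by_worker;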

And then we capture measurements:

slave1> SELECT THREAD_ID, EVENT_NAME, COUNT_STAR FROM events_statements_summary_by_thread_by_event_name WHERE THREAD_ID IN (28,29,30,31) AND COUNT_STAR > 0;
+-----------+----------------------+------------+
| THREAD_ID | EVENT_NAME           | COUNT_STAR |
+-----------+----------------------+------------+
|        28 | statement/sql/update |         48 |
|        28 | statement/sql/insert |         24 |
|        28 | statement/sql/delete |         24 |
|        28 | statement/sql/begin  |         24 |
|        29 | statement/sql/update |         68 |
|        29 | statement/sql/insert |         34 |
|        29 | statement/sql/delete |         34 |
|        29 | statement/sql/begin  |         34 |
|        30 | statement/sql/update |       1864 |
|        30 | statement/sql/insert |        932 |
|        30 | statement/sql/delete |        932 |
|        30 | statement/sql/begin  |        932 |
|        31 | statement/sql/update |      40320 |
|        31 | statement/sql/insert |      20160 |
|        31 | statement/sql/delete |      20160 |
|        31 | statement/sql/begin  |      20160 |
+-----------+----------------------+------------+
16 rows in set (0.01 sec)

As you can see above, one of the slave worker threads is doing much more work than the other three. That thread is applying events from a sysbench run against a database on the master with --trx-rate=10, while the other three databases are only running at --trx-rate=2 to 5.

Below is another example, this time including the event times. With a slightly modified sysbench setup, I designed the tests to send large updates: one database with a low transaction rate, another with a high transaction rate, and the rest in between. Comparing threads 32 and 34 specifically, you can see that the former performs far fewer UPDATEs but spends more time per event.

slave1> SELECT THREAD_ID, EVENT_NAME, COUNT_STAR, -> SUM_TIMER_WAIT/1000000 AS SUM_TIMER_WAIT_MS, -> (SUM_TIMER_WAIT/COUNT_STAR)/1000000 AS AVG_TIME_MS -> FROM events_statements_summary_by_thread_by_event_name -> WHERE THREAD_ID IN (31,32,33,34) AND COUNT_STAR > 0; +-----------+----------------------+------------+-------------------+--------------+ | THREAD_ID | EVENT_NAME | COUNT_STAR | SUM_TIMER_WAIT_MS | AVG_TIME_MS | +-----------+----------------------+------------+-------------------+--------------+ | 31 | statement/sql/update | 3 | 403.9690 | 134.65633333 | | 31 | statement/sql/insert | 1 | 91.5340 | 91.53400000 | | 31 | statement/sql/delete | 1 | 75.9540 | 75.95400000 | | 31 | statement/sql/begin | 1 | 11.6800 | 11.68000000 | | 32 | statement/sql/update | 75 | 25298.5090 | 337.31345333 | | 32 | statement/sql/insert | 25 | 2221.1410 | 88.84564000 | | 32 | statement/sql/delete | 25 | 2187.8970 | 87.51588000 | | 32 | statement/sql/begin | 25 | 321.0220 | 12.84088000 | | 33 | statement/sql/update | 4728 | 1008736.6000 | 213.35376481 | | 33 | statement/sql/insert | 1576 | 138815.0570 | 88.08061992 | | 33 | statement/sql/delete | 1576 | 136161.9060 | 86.39714848 | | 33 | statement/sql/begin | 1576 | 20498.2300 | 13.00649112 | | 34 | statement/sql/update | 70668 | 12304784.2380 | 174.12101995 | | 34 | statement/sql/insert | 23556 | 2083454.5350 | 88.44687277 | | 34 | statement/sql/delete | 23556 | 2041202.7690 | 86.65319957 | | 34 | statement/sql/begin | 23556 | 303710.4860 | 12.89312642 | +-----------+----------------------+------------+-------------------+--------------+ 16 rows in set (0.00 sec)

Exploring a bit further, one thing I noticed is that when using binlog_format=ROW for replication, only the BEGIN events are instrumented in performance_schema. I think this is a bug, so I reported it here.

slave1> SELECT THREAD_ID, EVENT_NAME, COUNT_STAR,
     ->        SUM_TIMER_WAIT/1000000 AS SUM_TIMER_WAIT_MS,
     ->        (SUM_TIMER_WAIT/COUNT_STAR)/1000000 AS AVG_TIME_MS
     -> FROM events_statements_summary_by_thread_by_event_name
     -> WHERE THREAD_ID IN (40,41,42,43) AND COUNT_STAR > 0;
+-----------+---------------------+------------+-------------------+-------------+
| THREAD_ID | EVENT_NAME          | COUNT_STAR | SUM_TIMER_WAIT_MS | AVG_TIME_MS |
+-----------+---------------------+------------+-------------------+-------------+
|        40 | statement/sql/begin |         16 |          258.6500 | 16.16562500 |
|        41 | statement/sql/begin |         91 |         1526.4320 | 16.77397802 |
|        42 | statement/sql/begin |        589 |        10131.4500 | 17.20110357 |
|        43 | statement/sql/begin |       5022 |        85584.0250 | 17.04182099 |
+-----------+---------------------+------------+-------------------+-------------+
4 rows in set (0.01 sec)

With MySQL 5.7, MTS gains a further improvement in how parallel replication is applied. By default, transactions are applied in parallel per database. The new slave_parallel_type variable introduces another value, LOGICAL_CLOCK, which allows parallelization within a single database as well. I ran some small tests below with the same workload as above. First, with the default DATABASE type:

+-----------+----------------------+------------+-------------------+--------------+ | THREAD_ID | EVENT_NAME | COUNT_STAR | SUM_TIMER_WAIT_MS | AVG_TIME_MS | +-----------+----------------------+------------+-------------------+--------------+ | 65 | statement/sql/update | 51 | 9974.1310 | 195.57119608 | | 65 | statement/sql/insert | 17 | 2037.7080 | 119.86517647 | | 65 | statement/sql/delete | 17 | 2144.4800 | 126.14588235 | | 65 | statement/sql/begin | 17 | 293.0650 | 17.23911765 | | 66 | statement/sql/update | 255 | 67131.4980 | 263.26077647 | | 66 | statement/sql/insert | 85 | 9629.5520 | 113.28884706 | | 66 | statement/sql/delete | 85 | 9854.2920 | 115.93284706 | | 66 | statement/sql/begin | 85 | 1405.0200 | 16.52964706 | | 67 | statement/sql/update | 1215 | 344712.6330 | 283.71410123 | | 67 | statement/sql/insert | 405 | 48000.0110 | 118.51854568 | | 67 | statement/sql/delete | 405 | 53222.3010 | 131.41308889 | | 67 | statement/sql/begin | 405 | 6836.9070 | 16.88125185 | | 68 | statement/sql/update | 5943 | 1820669.3290 | 306.35526317 | | 68 | statement/sql/insert | 1981 | 241513.1400 | 121.91476022 | | 68 | statement/sql/delete | 1981 | 245022.2450 | 123.68614084 | | 68 | statement/sql/begin | 1981 | 32978.3390 | 16.64731903 | +-----------+----------------------+------------+-------------------+--------------+
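For reference, switching the applier between these modes can be done roughly like this (a sketch, assuming MySQL 5.7; the SQL thread must be stopped to change these settings, and slave_preserve_commit_order – used in the third test below – additionally requires log-bin and log_slave_updates):

STOP SLAVE SQL_THREAD;
SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK';
-- Only for the third test: also preserve the master's commit order
SET GLOBAL slave_preserve_commit_order = ON;
START SLAVE SQL_THREAD;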

And then with LOGICAL_CLOCK:

+-----------+----------------------+------------+-------------------+--------------+ | THREAD_ID | EVENT_NAME | COUNT_STAR | SUM_TIMER_WAIT_MS | AVG_TIME_MS | +-----------+----------------------+------------+-------------------+--------------+ | 59 | statement/sql/update | 9486 | 2667681.7170 | 281.22303574 | | 59 | statement/sql/insert | 3162 | 376350.1650 | 119.02282258 | | 59 | statement/sql/delete | 3162 | 388606.5460 | 122.89897090 | | 59 | statement/sql/begin | 3162 | 53988.2600 | 17.07408602 | | 60 | statement/sql/update | 903 | 362853.2080 | 401.83079513 | | 60 | statement/sql/insert | 301 | 36507.3090 | 121.28674086 | | 60 | statement/sql/delete | 301 | 37437.2550 | 124.37626246 | | 60 | statement/sql/begin | 301 | 5210.4110 | 17.31033555 | | 61 | statement/sql/update | 42 | 23394.0330 | 557.00078571 | | 61 | statement/sql/insert | 14 | 1671.5900 | 119.39928571 | | 61 | statement/sql/delete | 14 | 1720.1230 | 122.86592857 | | 61 | statement/sql/begin | 14 | 246.1490 | 17.58207143 | +-----------+----------------------+------------+-------------------+--------------+

With LOGICAL_CLOCK and slave_preserve_commit_order enabled:

+-----------+----------------------+------------+-------------------+--------------+ | THREAD_ID | EVENT_NAME | COUNT_STAR | SUM_TIMER_WAIT_MS | AVG_TIME_MS | +-----------+----------------------+------------+-------------------+--------------+ | 71 | statement/sql/update | 8097 | 2216461.7170 | 273.73863369 | | 71 | statement/sql/insert | 2699 | 322933.1620 | 119.64918933 | | 71 | statement/sql/delete | 2699 | 326944.2380 | 121.13532345 | | 71 | statement/sql/begin | 2699 | 45860.5720 | 16.99169026 | | 72 | statement/sql/update | 807 | 256668.2730 | 318.05238290 | | 72 | statement/sql/insert | 269 | 32952.5570 | 122.50021190 | | 72 | statement/sql/delete | 269 | 33346.3060 | 123.96396283 | | 72 | statement/sql/begin | 269 | 4650.1010 | 17.28662082 | | 73 | statement/sql/update | 33 | 6823.1170 | 206.76112121 | | 73 | statement/sql/insert | 11 | 1512.5810 | 137.50736364 | | 73 | statement/sql/delete | 11 | 1593.5790 | 144.87081818 | | 73 | statement/sql/begin | 11 | 188.3910 | 17.12645455 | +-----------+----------------------+------------+-------------------+--------------+

Combining the INSERT, UPDATE and DELETE counts from these three tests, LOGICAL_CLOCK comes out on top with around 17k events applied within 120 seconds of the same workload. The DATABASE type was the slowest, recording only about 12k events committed on the slave.
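A query along these lines produces that combined count directly (a sketch; the THREAD_ID list is the set of worker threads from the run being measured, here the LOGICAL_CLOCK run above):

SELECT SUM(COUNT_STAR) AS total_dml_events
  FROM performance_schema.events_statements_summary_by_thread_by_event_name
 WHERE THREAD_ID IN (59, 60, 61)
   AND EVENT_NAME IN ('statement/sql/insert',
                      'statement/sql/update',
                      'statement/sql/delete');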

Lastly, MySQL 5.7 adds a new transaction instrument that can be enabled so that transaction metrics are collected. With it, you can measure the per-transaction rate instead of the per-event-class rate.

slave1> UPDATE setup_consumers SET ENABLED = 'YES' WHERE NAME LIKE 'vents_transactions%';
Query OK, 0 rows affected (0.00 sec)
Rows matched: 0  Changed: 0  Warnings: 0

slave1> UPDATE setup_instruments SET ENABLED = 'YES', TIMED = 'YES' WHERE NAME = 'transaction';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

And a quick measurement:

slave1> SELECT THREAD_ID, EVENT_NAME, COUNT_STAR,
     ->        SUM_TIMER_WAIT/1000000 AS SUM_TIMER_WAIT_MS,
     ->        (SUM_TIMER_WAIT/COUNT_STAR)/1000000 AS AVG_TIME_MS
     -> FROM events_transactions_summary_by_thread_by_event_name
     -> WHERE THREAD_ID IN (
     ->   SELECT THREAD_ID FROM replication_applier_status_by_worker
     -> ) AND COUNT_STAR > 0;
+-----------+-------------+------------+-------------------+----------------+
| THREAD_ID | EVENT_NAME  | COUNT_STAR | SUM_TIMER_WAIT_MS | AVG_TIME_MS    |
+-----------+-------------+------------+-------------------+----------------+
|        71 | transaction |       4849 |     50323942.6710 | 10378.21049103 |
|        72 | transaction |        487 |      6421399.4580 | 13185.62517043 |
|        73 | transaction |         17 |       319024.9280 | 18766.17223529 |
+-----------+-------------+------------+-------------------+----------------+
3 rows in set (0.00 sec)

Of course, statement events and transactions are not the only things you can measure – wait times and stages are instrumented as well, and the same enable-then-query pattern applies (a sketch follows below). Happy instrumenting!
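For stages, for example, the pattern looks like this (a minimal sketch; instrument and consumer names are those shipped with stock Performance Schema):

UPDATE setup_consumers SET ENABLED = 'YES' WHERE NAME LIKE 'events_stages_%';
UPDATE setup_instruments SET ENABLED = 'YES', TIMED = 'YES' WHERE NAME LIKE 'stage/%';

SELECT THREAD_ID, EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT/1000000 AS SUM_TIMER_WAIT_MS
  FROM events_stages_summary_by_thread_by_event_name
 WHERE COUNT_STAR > 0;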


Sneak peek at the Percona Live Data Performance Conference and Expo 2016

MySQL Performance Blog - Fri, 2015-12-18 00:43

On behalf of the Percona Live Conference Committee, I am excited to announce the sneak peek schedule for the Percona Live Data Performance Conference and Expo 2016!

The Percona Live Data Performance Conference and Expo 2016 will feature a variety of formal tracks and sessions related to MySQL, NoSQL and Data in the Cloud. With over 150 slots to fill, there will be no shortage of great content this year. Though the entire conference schedule won’t be finalized until mid-January, this preview list of talks is sure to whet your appetite! So without further ado, here is the SNEAK PEEK SCHEDULE!

Just a reminder to everyone out there: our Super Saver discount rate for the Percona Live Data Performance and Expo 2016 is only available ‘til December 31! This rate gets you all the excellent and amazing opportunities that Percona Live offers, at a discount.

We also have a limited opportunity for a higher discount! Click here to find out about it!

Become a conference sponsor! We have sponsorship opportunities available for the world’s largest annual MySQL event. Sponsors become a part of a dynamic and growing ecosystem and interact with more than 1,000 DBAs, sysadmins, developers, CTOs, CEOs, business managers, technology evangelists, solutions vendors, and entrepreneurs who attend the event.

See you all at Percona Live Data Performance and Expo 2016!


used_key_parts: EXPLAIN FORMAT=JSON provides insight into which part of multiple-column key is used

MySQL Performance Blog - Fri, 2015-12-18 00:20

In the previous post of this ongoing “EXPLAIN FORMAT=JSON is Cool!” series, we discussed covered indexes and how the used_columns array can help you choose them wisely. There is one more type of multiple-column index: the composite index, which is simply an index on multiple columns. Covered indexes are a subgroup of the larger set of composite indexes. In this post we’ll discuss how “used_key_parts” can help show which part of a multiple-column key is being used.

You should prioritize composite indexes when you have queries that search on a set of multiple columns as well as queries that search only on a leading subset of those columns. For example, if you run queries like:

SELECT first_name, last_name FROM employees WHERE first_name='Steve';
SELECT first_name, last_name FROM employees WHERE first_name='Steve' and last_name like 'V%';
SELECT first_name, last_name FROM employees WHERE first_name='Steve' and last_name like 'V%' and hire_date > '1990-01-01';

It would be better to have a single index on the first_name, last_name and hire_date columns rather than three indexes: one on first_name, a composite on (first_name, last_name), and a composite on (first_name, last_name, hire_date). But what is the best way to test how effective the new index is?

Once again, the answer is EXPLAIN FORMAT=JSON.

To illustrate this idea, let’s add a composite index on (first_name, last_name, hire_date) to the table “employees” from the standard employees database:

mysql> alter table employees add index comp (first_name, last_name, hire_date);
Query OK, 0 rows affected (9.32 sec)
Records: 0  Duplicates: 0  Warnings: 0

Now let’s check if this index is used to resolve our queries:

mysql> explain SELECT first_name, last_name FROM employees WHERE first_name='Steve'; +----+-------------+-----------+------------+------+---------------+------+---------+-------+------+----------+-------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-----------+------------+------+---------------+------+---------+-------+------+----------+-------------+ | 1 | SIMPLE | employees | NULL | ref | comp | comp | 16 | const | 245 | 100.00 | Using index | +----+-------------+-----------+------------+------+---------------+------+---------+-------+------+----------+-------------+ 1 row in set, 1 warning (0.00 sec) mysql> explain SELECT first_name, last_name FROM employees WHERE first_name='Steve' and last_name like 'V%'; +----+-------------+-----------+------------+-------+---------------+------+---------+------+------+----------+--------------------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-----------+------------+-------+---------------+------+---------+------+------+----------+--------------------------+ | 1 | SIMPLE | employees | NULL | range | comp | comp | 34 | NULL | 8 | 100.00 | Using where; Using index | +----+-------------+-----------+------------+-------+---------------+------+---------+------+------+----------+--------------------------+ 1 row in set, 1 warning (0.00 sec) mysql> explain SELECT first_name, last_name FROM employees WHERE first_name='Steve' and last_name like 'V%' and hire_date > '1990-01-01'; +----+-------------+-----------+------------+-------+---------------+------+---------+------+------+----------+--------------------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-----------+------------+-------+---------------+------+---------+------+------+----------+--------------------------+ | 1 | SIMPLE | employees | NULL | range | comp | comp | 37 | NULL | 8 | 33.33 | Using where; Using index | +----+-------------+-----------+------------+-------+---------------+------+---------+------+------+----------+--------------------------+ 1 row in set, 1 warning (0.00 sec)

It is used in all three queries, and key_len increases – which shows that each query uses more parts of the index. But which parts of the index were actually used to resolve the WHERE condition, and which were used to retrieve rows?

EXPLAIN FORMAT=JSON  stores this information in the used_key_parts  member.

For the first two queries, the following result is shown:

mysql> explain format=json SELECT first_name, last_name FROM employees WHERE first_name='Steve'G *************************** 1. row *************************** EXPLAIN: { "query_block": { "select_id": 1, "cost_info": { "query_cost": "51.22" }, "table": { "table_name": "employees", "access_type": "ref", "possible_keys": [ "comp" ], "key": "comp", "used_key_parts": [ "first_name" ], "key_length": "16", "ref": [ "const" ], "rows_examined_per_scan": 245, "rows_produced_per_join": 245, "filtered": "100.00", "using_index": true, "cost_info": { "read_cost": "2.22", "eval_cost": "49.00", "prefix_cost": "51.22", "data_read_per_join": "11K" }, "used_columns": [ "first_name", "last_name" ] } } } 1 row in set, 1 warning (0.00 sec)

Only the first_name  field of the index was used for the query with the WHERE first_name='Steve' condition.

mysql> explain format=json SELECT first_name, last_name FROM employees WHERE first_name='Steve' and last_name like 'V%'G *************************** 1. row *************************** EXPLAIN: { "query_block": { "select_id": 1, "cost_info": { "query_cost": "4.24" }, "table": { "table_name": "employees", "access_type": "range", "possible_keys": [ "comp" ], "key": "comp", "used_key_parts": [ "first_name", "last_name" ], "key_length": "34", "rows_examined_per_scan": 8, "rows_produced_per_join": 8, "filtered": "100.00", "using_index": true, "cost_info": { "read_cost": "2.65", "eval_cost": "1.60", "prefix_cost": "4.25", "data_read_per_join": "384" }, "used_columns": [ "first_name", "last_name" ], "attached_condition": "((`employees`.`employees`.`first_name` = 'Steve') and (`employees`.`employees`.`last_name` like 'V%'))" } } } 1 row in set, 1 warning (0.00 sec)

And two fields, first_name and last_name, were used for the second query.

"key": "comp", "used_key_parts": [ "first_name", "last_name" ],

But surprisingly, used_key_parts is the same for the last query, even though that query also filters on the hire_date column:

mysql> explain format=json SELECT first_name, last_name FROM employees WHERE first_name='Steve' and last_name like 'V%' and hire_date > '1990-01-01'G *************************** 1. row *************************** EXPLAIN: { "query_block": { "select_id": 1, "cost_info": { "query_cost": "4.24" }, "table": { "table_name": "employees", "access_type": "range", "possible_keys": [ "comp" ], "key": "comp", "used_key_parts": [ "first_name", "last_name" ], "key_length": "37", "rows_examined_per_scan": 8, "rows_produced_per_join": 2, "filtered": "33.33", "using_index": true, "cost_info": { "read_cost": "3.71", "eval_cost": "0.53", "prefix_cost": "4.25", "data_read_per_join": "127" }, "used_columns": [ "first_name", "last_name", "hire_date" ], "attached_condition": "((`employees`.`employees`.`first_name` = 'Steve') and (`employees`.`employees`.`last_name` like 'V%') and (`employees`.`employees`.`hire_date` > '1990-01-01'))" } } } 1 row in set, 1 warning (0.00 sec)

This is most likely because too many values in the hire_date column satisfy the condition, so it is cheaper to retrieve the data set using only part of the index and then check the hire_date condition against those rows.
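One way to sanity-check that reasoning is to look at how selective the hire_date condition really is among the rows matched by the first two key parts (a sketch against the standard employees database):

SELECT COUNT(*) AS matched_by_name_prefix,
       SUM(hire_date > '1990-01-01') AS also_match_hire_date
  FROM employees
 WHERE first_name = 'Steve'
   AND last_name LIKE 'V%';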

This means that, since we don’t retrieve hire_date (and it isn’t used as a key part), we can drop it from the index. We might be a bit leery that table rows will have to be accessed to perform the final comparison on the hire_date column, but in this case it’s fine:

mysql> flush status; Query OK, 0 rows affected (0.06 sec) mysql> SELECT first_name, last_name FROM employees WHERE first_name='Steve' and last_name like 'V%' and hire_date > '1990-01-01'; +------------+-----------+ | first_name | last_name | +------------+-----------+ | Steve | Vecchi | | Steve | Veldwijk | | Steve | Vickson | +------------+-----------+ 3 rows in set (0.00 sec) mysql> show status like 'Handler%'; +----------------------------+-------+ | Variable_name | Value | +----------------------------+-------+ | Handler_commit | 1 | | Handler_delete | 0 | | Handler_discover | 0 | | Handler_external_lock | 2 | | Handler_mrr_init | 0 | | Handler_prepare | 0 | | Handler_read_first | 0 | | Handler_read_key | 1 | | Handler_read_last | 0 | | Handler_read_next | 8 | | Handler_read_prev | 0 | | Handler_read_rnd | 0 | | Handler_read_rnd_next | 0 | | Handler_rollback | 0 | | Handler_savepoint | 0 | | Handler_savepoint_rollback | 0 | | Handler_update | 0 | | Handler_write | 0 | +----------------------------+-------+ 18 rows in set (0.00 sec) mysql> alter table employees drop index comp; Query OK, 0 rows affected (0.27 sec) Records: 0 Duplicates: 0 Warnings: 0 mysql> alter table employees add index comp (first_name, last_name); Query OK, 0 rows affected (7.57 sec) Records: 0 Duplicates: 0 Warnings: 0 mysql> flush status; Query OK, 0 rows affected (0.03 sec) mysql> SELECT first_name, last_name FROM employees WHERE first_name='Steve' and last_name like 'V%' and hire_date > '1990-01-01'; +------------+-----------+ | first_name | last_name | +------------+-----------+ | Steve | Vecchi | | Steve | Veldwijk | | Steve | Vickson | +------------+-----------+ 3 rows in set (0.00 sec) mysql> show status like 'Handler%'; +----------------------------+-------+ | Variable_name | Value | +----------------------------+-------+ | Handler_commit | 1 | | Handler_delete | 0 | | Handler_discover | 0 | | Handler_external_lock | 2 | | Handler_mrr_init | 0 | | Handler_prepare | 0 | | Handler_read_first | 0 | | Handler_read_key | 1 | | Handler_read_last | 0 | | Handler_read_next | 8 | | Handler_read_prev | 0 | | Handler_read_rnd | 0 | | Handler_read_rnd_next | 0 | | Handler_rollback | 0 | | Handler_savepoint | 0 | | Handler_savepoint_rollback | 0 | | Handler_update | 0 | | Handler_write | 0 | +----------------------------+-------+ 18 rows in set (0.00 sec)

As you can see, the Handler_* variables are the same for both indexes. The reason is that in this case the optimizer can use the index condition pushdown optimization:

mysql> explain format=json SELECT first_name, last_name FROM employees WHERE first_name='Steve' and last_name like 'V%' and hire_date > '1990-01-01'G *************************** 1. row *************************** EXPLAIN: { "query_block": { "select_id": 1, "cost_info": { "query_cost": "12.21" }, "table": { "table_name": "employees", "access_type": "range", "possible_keys": [ "comp" ], "key": "comp", "used_key_parts": [ "first_name", "last_name" ], "key_length": "34", "rows_examined_per_scan": 8, "rows_produced_per_join": 2, "filtered": "33.33", "index_condition": "((`employees`.`employees`.`first_name` = 'Steve') and (`employees`.`employees`.`last_name` like 'V%'))", "cost_info": { "read_cost": "11.68", "eval_cost": "0.53", "prefix_cost": "12.21", "data_read_per_join": "127" }, "used_columns": [ "first_name", "last_name", "hire_date" ], "attached_condition": "(`employees`.`employees`.`hire_date` > '1990-01-01')" } } } 1 row in set, 1 warning (0.00 sec)

In the output above we don’t have a member:

"using_index": true,

But there is information about index condition:

"index_condition": "((`employees`.`employees`.`first_name` = 'Steve') and (`employees`.`employees`.`last_name` like 'V%'))",

However, query_cost is higher in this case: 12.21, versus 4.24 for the composite index on all three fields.
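If you are curious how much the pushed-down condition contributes on its own, you can temporarily disable it and compare the EXPLAIN output (a sketch; index_condition_pushdown is a standard optimizer_switch flag):

SET optimizer_switch = 'index_condition_pushdown=off';
EXPLAIN FORMAT=JSON
SELECT first_name, last_name FROM employees
 WHERE first_name = 'Steve' AND last_name LIKE 'V%' AND hire_date > '1990-01-01';
SET optimizer_switch = 'index_condition_pushdown=on';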

Conclusion: The used_key_parts field of the EXPLAIN FORMAT=JSON  output can help us to identify how effective our composite indexes are.


Percona XtraBackup 2.3.3 is now available

MySQL Performance Blog - Thu, 2015-12-17 13:30

Percona is glad to announce the release of Percona XtraBackup 2.3.3 on December 17th, 2015. Downloads are available from our download site or Percona Software Repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

This release is the current GA (Generally Available) stable release in the 2.3 series.

Bugs Fixed:

  • Database directories were not removed if DROP DATABASE happened during the backup. Bug fixed #1461735.
  • Backup would fail if Store backup history on the server feature was enabled and backup was taken from server without binary log enabled. Bug fixed #1509812.
  • Percona XtraBackup now fails with descriptive error message if --defaults-extra-file option is not specified first. Bug fixed #1511451.
  • Backup would fail if --rsync option was used without specifying temporary folder. Bug fixed #1511701.
  • Fixed Percona XtraBackup crash which happened when it was used for SST on MariaDB Galera Cluster caused by double free of datadir variable. Bug fixed #1512281.
  • --move-back option did not respect the innodb_log_group_home_dir and innodb_data_home_dir options which caused ib_logfiles and data files not to be moved back to correct location. Bug fixed #1512616.
  • xtrabackup_binlog_info was not updated correctly when applying incremental backups. Bug fixed #1523687.
  • When using a --defaults-file option Percona XtraBackup would complain about datadir being mismatched if it wasn’t explicitly set in the defaults file. Bug fixed #1508448.
  • Fixed build issues by adding missing check in cmake script for xxd presence. Bug fixed #1511267.
  • Percona XtraBackup would terminate backup process without error if --slave-info option was used on a server not running as a replication slave. Bug fixed #1513520.
  • innobackupex when used with --stream option would create an empty directory with a timestamp as a name. Bug fixed #1520569.

Other bugs fixed #1523728 and #1507238.

Release notes with all the bugfixes for Percona XtraBackup 2.3.3 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.


Using dbsake to recover table structure from .frm files and process mysqldump output

MySQL Performance Blog - Wed, 2015-12-16 20:34

We work on data recoveries quite often. In many cases, we recover table structures from the .frm files because there was no backup available. There is already a great blog post by my colleague Miguel Ángel Nieto about how we can recover structures from .frm files using MySQL utilities.

This works pretty well, and we prefer to run mysqlfrm with the --server option to get all possible information from a .frm file. However, this option expects MySQL to be up and running, so that mysqlfrm can spawn a new MySQL instance and run the structure recovery there.

Recently I came across a tool that makes this job easier. The tool is called dbsake: a collection of command-line tools that perform various DBA-related tasks for MySQL. In this blog post, we will look at two very useful dbsake commands.

Installation is very easy and straightforward: it ships as an executable Python zip archive with all dependencies included.

# curl -s http://get.dbsake.net > dbsake
# chmod u+x dbsake
# ./dbsake --version
dbsake, version 2.1.0 9525896

Recovering table structures from MySQL .frm files with dbsake

To recover table structures using dbsake, you need the “dbsake frmdump” command, followed by the .frm file path. The frmdump command decodes the MySQL .frm file and produces a “CREATE TABLE” or “CREATE VIEW” statement in the output. The good thing is that it doesn’t require a running MySQL server instance; it interprets the .frm file according to rules similar to those of the MySQL server.

Let’s see an example:

# ./dbsake frmdump /var/lib/mysql/sakila/staff.frm
--
-- Table structure for table `staff`
-- Created with MySQL Version 5.6.27
--
CREATE TABLE `staff` (
  `staff_id` tinyint(3) unsigned NOT NULL AUTO_INCREMENT,
  `first_name` varchar(45) NOT NULL,
  `last_name` varchar(45) NOT NULL,
  `address_id` smallint(5) unsigned NOT NULL,
  `picture` blob,
  `email` varchar(50) DEFAULT NULL,
  `store_id` tinyint(3) unsigned NOT NULL,
  `active` tinyint(1) NOT NULL DEFAULT '1',
  `username` varchar(16) NOT NULL,
  `password` varchar(40) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`staff_id`),
  KEY `idx_fk_store_id` (`store_id`),
  KEY `idx_fk_address_id` (`address_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

The result looks pretty good, and the character set and collation information has been recovered as well. We can also see the MySQL version (5.6.27) retrieved from the .frm file. It is important to mention that the command only decodes the information available in the .frm file, which means that it cannot recover InnoDB foreign-key references or AUTO_INCREMENT values; these items are stored outside of the .frm file.
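If the original table had foreign keys or a meaningful AUTO_INCREMENT counter, they have to be re-added manually after the structure is restored. A hypothetical example for the staff table above (the constraint name and the counter value are made up for illustration):

ALTER TABLE staff
  ADD CONSTRAINT fk_staff_address
  FOREIGN KEY (address_id) REFERENCES address (address_id);
ALTER TABLE staff AUTO_INCREMENT = 10001;  -- set past the highest recovered staff_id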

The frmdump command makes the recovery process easier and faster. We can easily script it to recover the structure of a large number of tables. For example, if we need to recover the structure of all tables from the world database, we can do the following:

mysql> create database world_recover;

# for tbl in `ls -1 /var/lib/mysql/world/*.frm`; do ./dbsake frmdump $tbl | mysql world_recover; done;

mysql> show tables from world_recover;
+-------------------------+
| Tables_in_world_recover |
+-------------------------+
| city                    |
| country                 |
| countrylanguage         |
+-------------------------+
3 rows in set (0.00 sec)

Filter and transform a mysqldump stream with dbsake

It’s a very common requirement to filter one or more tables out of a full mysqldump database backup. The “dbsake sieve [options]” command helps us filter or transform mysqldump output.

Let’s see how to extract a single table from a mysqldump file.

# mysqldump world > world.sql
# cat world.sql | ./dbsake sieve -t world.city > world.city.sql
Processed . Output: 1 database(s) 1 table(s) and 0 view(s)

The “-t” or “--table” option tells the command to output only the tables matching the given pattern. It also reports the number of databases, tables and views processed in the output.

To extract multiple tables, you can pass “-t db.tbl” multiple times.

# cat world.sql | ./dbsake sieve -t world.city -t world.country > world.city_country.sql
Processed . Output: 1 database(s) 2 table(s) and 0 view(s)
# cat world.city_country.sql | grep -i 'create table'
CREATE TABLE `city` (
CREATE TABLE `country` (

The latest Percona Server added a new mysqldump option, “--innodb-optimize-keys”. It changes the way InnoDB tables are dumped so that secondary keys are created after the data is loaded, taking advantage of InnoDB fast index creation. This is a really great feature, because it helps us restore data more efficiently than the default incremental index rebuild that mysqldump output otherwise causes.

Using the “dbsake sieve [options]” command, we can transform regular mysqldump output to take advantage of fast index creation. The “--defer-indexes” option rewrites the output of CREATE TABLE statements and arranges for secondary indexes to be created after the table data is loaded. Similarly, the “--defer-foreign-keys” option can be added to create foreign key constraints after loading the table data.

Let’s see an example:

# cat world.sql | ./dbsake sieve --defer-indexes --defer-foreign-keys -t world.city > world.city.sql
Processed . Output: 1 database(s) 1 table(s) and 0 view(s)

This means that world.city.sql will contain the table structure with only the primary key, then the INSERT statements to load the data, and an additional ALTER TABLE statement that creates the secondary keys (when there is at least one secondary index to add). Foreign keys are also created together with the secondary indexes.

The original structure of table world.city:

CREATE TABLE `city` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `Name` char(35) NOT NULL DEFAULT '',
  `CountryCode` char(3) NOT NULL DEFAULT '',
  `District` char(20) NOT NULL DEFAULT '',
  `Population` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`ID`),
  KEY `CountryCode` (`CountryCode`),
  CONSTRAINT `city_ibfk_1` FOREIGN KEY (`CountryCode`) REFERENCES `country` (`Code`)
) ENGINE=InnoDB AUTO_INCREMENT=4080 DEFAULT CHARSET=latin1;

And the transformation done by dbsake:

...........
CREATE TABLE `city` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `Name` char(35) NOT NULL DEFAULT '',
  `CountryCode` char(3) NOT NULL DEFAULT '',
  `District` char(20) NOT NULL DEFAULT '',
  `Population` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=4080 DEFAULT CHARSET=latin1;
...........
LOCK TABLES `city` WRITE;
...........
INSERT INTO `city` VALUES .....................
...........
UNLOCK TABLES;
--
-- InnoDB Fast Index Creation (generated by dbsake)
--
ALTER TABLE `city`
  ADD KEY `CountryCode` (`CountryCode`),
  ADD CONSTRAINT `city_ibfk_1` FOREIGN KEY (`CountryCode`) REFERENCES `country` (`Code`);
...........

For more sieve command options, please read the online manual.

Conclusion

The dbsake tool makes it easier to recover table structures from .frm files. We can also filter and transform mysqldump output easily, without writing a complex awk or sed script. There are more useful features in this tool that you can read about in the manual.


Act Quickly for Percona Live Super Saver rate – Plus an added incentive!

MySQL Performance Blog - Tue, 2015-12-15 22:43

Our Super Saver discount rate for Percona Live 2016 is only available ‘til December 31st! This rate gets you all the excellent and amazing opportunities that Percona Live offers, at a discount.

The Percona Live Data Performance Conference is the premier open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

As an added incentive, we’ve got an even better offer in this blog! Complete the paragraph at the end of the post with the words we’ve provided, and paste it into the comments: that’s right, it’s mad DBA libs! Every mad DBA libs posted in the comments gets an extra $20 off their Percona Live tickets. To up the stakes even further, we’ll select one lucky poster from the comments to get $100 off each ticket (chosen randomly from the comments section).

The Percona Live Data Performance Conference will be April 18-21 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center. Sponsorship opportunities are also available and provide the opportunity to interact with more than 1,200 attendees and become part of a dynamic and growing ecosystem.

Guarantee your spot now to get the best value for your Percona Live experience. If you purchase a ticket by December 31st, it will only cost you $1399 – that’s nearly 20% off the full rate! And post the mad libs in the comments below for an additional $20 off, and the chance for $100 off!

DBA Libs:

Over time, the impact of complex ______ designs, usage, growth and upgrades can combine to put a drag on _______. Potential ______ killers are easy to _____when you are busy with daily ______ administration. Once these problems are discovered and corrected, you will see ______ improvements in ______ performance and resilience.

Word List:

  • Slinky
  • Annoying
  • Kiss(es)
  • Caress(es)
  • Hug(s)
  • Sexy
  • Frog(s)
  • Incoherent
  • Ferrari(s)
  • Parrot(s)
  • Parasol(s)
  • Cocktail(s)
  • Meeting(s)
  • Sultry
  • Dancing
  • Clumsy
  • Cuddly
  • Spicy
  • Steal
  • Laugh
  • Fuzzy
  • Grab
  • Embrace


MongoDB 3.2 WiredTiger in iiBench

MySQL Performance Blog - Tue, 2015-12-15 18:30

MongoDB 3.2 was recently released with WiredTiger as the default storage engine.

In just over five years, MongoDB has evolved into a popular database. MongoDB 3.0 supported “pluggable storage engines.” The B-Tree-based WiredTiger should outperform IO-optimized RocksDB and PerconaFT in in-memory workloads, but it demonstrates performance degradation when we move into IO workloads.

There are reports that WiredTiger 3.2 comes with improved performance, so I ran a quick benchmark against WiredTiger 3.0.

My interest is not only in absolute performance, but also in how WiredTiger performs during checkpointing. In my previous test we saw periodic drops:  https://www.percona.com/blog/2015/07/14/mongodb-benchmark-sysbench-mongodb-io-bound-workload-comparison/.

For this test I am using iiBench by Mark Callaghan: https://github.com/mdcallag/iibench-mongodb.

WiredTiger command line:

numactl --interleave=all ./mongod --dbpath=/mnt/i3600/mongo/ --storageEngine=wiredTiger --syncdelay=900 --wiredTigerCacheSizeGB=10 --wiredTigerJournalCompressor=none

Storage: Intel SSD DC P3600 SSD 1.6TB
Server: Bare Metal powered by Intel(R) Xeon(R) CPU E5-2680

For workload, I inserted 200 million records with 1 index.

The results (chart not reproduced in this text version):

WiredTiger 3.2 is indeed faster. It finished in 31 min (compared to 51 min for WiredTiger 3.0).

However, the performance stalls are still there. There are periods of one minute or longer when WiredTiger refuses to process data.


used_columns: EXPLAIN FORMAT=JSON tells when you should use covered indexes

MySQL Performance Blog - Mon, 2015-12-14 23:04

In the “MySQL Query tuning 101” video, Alexander Rubin provides an excellent example of when to use a covered index. On slide 25, he takes the query select name from City where CountryCode = 'USA' and District = 'Alaska' and population > 10000 and adds the index cov1(CountryCode, District, population, name) on table City. With Alex’s query tuning experience, making the right index decision is simple – but what about us mere mortals? If a query is more complicated, or simply uses more than one table, how do we know what to do? Maintaining another index can slow down INSERT statements, so you need to be very careful when choosing one. Examining the “used_columns” array could help out.
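For reference, the index and query from that slide look like this in SQL (a sketch against the standard world database; MySQL column names are case-insensitive, so capitalization differences from the slide don’t matter):

ALTER TABLE City ADD INDEX cov1 (CountryCode, District, Population, Name);

SELECT Name
  FROM City
 WHERE CountryCode = 'USA'
   AND District = 'Alaska'
   AND Population > 10000;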

Let’s assume a more complicated version of the query was used in “MySQL Query tuning 101”:

select City.name as city, Country.name as country, group_concat(Language) from City join CountryLanguage using(CountryCode) join Country where City.CountryCode=Country.Code and Continent = 'North America' and District='St George' group by City.name, Country.Name;

Can we use a covered index here?

A traditional text-based EXPLAIN  already shows that it is a pretty good plan:

mysql> explain select City.name as city, Country.name as country, group_concat(Language) from City join CountryLanguage using(CountryCode) join Country where City.CountryCode=Country.Code and Continent = 'North America' and District='St George' group by City.name, Country.NameG *************************** 1. row *************************** id: 1 select_type: SIMPLE table: Country partitions: NULL type: ALL possible_keys: PRIMARY key: NULL key_len: NULL ref: NULL rows: 239 filtered: 14.29 Extra: Using where; Using temporary; Using filesort *************************** 2. row *************************** id: 1 select_type: SIMPLE table: City partitions: NULL type: ref possible_keys: CountryCode key: CountryCode key_len: 3 ref: world.Country.Code rows: 18 filtered: 10.00 Extra: Using where *************************** 3. row *************************** id: 1 select_type: SIMPLE table: CountryLanguage partitions: NULL type: ref possible_keys: PRIMARY,CountryCode key: CountryCode key_len: 3 ref: world.Country.Code rows: 4 filtered: 100.00 Extra: Using index 3 rows in set, 1 warning (0.00 sec) Note (Code 1003): /* select#1 */ select `world`.`City`.`Name` AS `city`,`world`.`Country`.`Name` AS `country`,group_concat(`world`.`CountryLanguage`.`Language` separator ',') AS `group_concat(Language)` from `world`.`City` join `world`.`CountryLanguage` join `world`.`Country` where ((`world`.`City`.`District` = 'St George') and (`world`.`Country`.`Continent` = 'North America') and (`world`.`City`.`CountryCode` = `world`.`Country`.`Code`) and (`world`.`CountryLanguage`.`CountryCode` = `world`.`Country`.`Code`)) group by `world`.`City`.`Name`,`world`.`Country`.`Name`

Can we make it better? Since our topic is covered indexes, let’s consider this possibility.

EXPLAIN FORMAT=JSON will tell us which columns a covering index should include:

mysql> explain format=json select City.name as city, Country.name as country, group_concat(Language) from City join CountryLanguage using(CountryCode) join Country where City.CountryCode=Country.Code and Continent = 'North America' and District='St George' group by City.name, Country.NameG *************************** 1. row *************************** EXPLAIN: { "query_block": { "select_id": 1, "cost_info": { "query_cost": "927.92" }, <I skipped output for groupping operation and other tables here> { "table": { "table_name": "City", "access_type": "ref", "possible_keys": [ "CountryCode" ], "key": "CountryCode", "used_key_parts": [ "CountryCode" ], "key_length": "3", "ref": [ "world.Country.Code" ], "rows_examined_per_scan": 18, "rows_produced_per_join": 63, "filtered": "10.00", "cost_info": { "read_cost": "630.74", "eval_cost": "12.61", "prefix_cost": "810.68", "data_read_per_join": "4K" }, "used_columns": [ "ID", "Name", "CountryCode", "District" ], "attached_condition": "(`world`.`City`.`District` = 'St George')" } },

The answer is in the array “used_columns”. It lists the ID (primary key) and all columns which I used in the query:

"used_columns": [ "ID", "Name", "CountryCode", "District" ],

Now we can try adding a covered index:

mysql> alter table City add index cov(CountryCode, District, Name);
Query OK, 0 rows affected (2.74 sec)
Records: 0  Duplicates: 0  Warnings: 0

EXPLAIN confirms that index-only access (“using_index”: true) is used:

mysql> explain format=json select City.name as city, Country.name as country, group_concat(Language) from City join CountryLanguage using(CountryCode) join Country where City.CountryCode=Country.Code and Continent = 'North America' and District='St George' group by City.name, Country.NameG *************************** 1. row *************************** EXPLAIN: {   "query_block": {     "select_id": 1,     "cost_info": {       "query_cost": "296.28"     },     <I skipped output for groupping operation and other tables here>         {           "table": {             "table_name": "City",             "access_type": "ref",             "possible_keys": [               "CountryCode",               "cov"             ],             "key": "cov",             "used_key_parts": [               "CountryCode",               "District"             ],             "key_length": "23",             "ref": [               "world.Country.Code",               "const"             ],             "rows_examined_per_scan": 2,             "rows_produced_per_join": 100,             "filtered": "100.00",             "using_index": true,             "cost_info": {               "read_cost": "34.65",               "eval_cost": "20.19",               "prefix_cost": "108.64",               "data_read_per_join": "7K"             },             "used_columns": [               "ID",               "Name",               "CountryCode",               "District"             ]           }         },

It also provides such metrics as:

  • query_cost – 296.28 for the indexed table against 927.92 (smaller is better)
  • rows_examined_per_scan – 2 versus 18 (smaller is better)
  • filtered – 100 versus 10 (bigger is better)
  • cost_info – read_cost  and prefix_cost  for the indexed table are smaller than when not indexed, which is better. However, eval_cost  and data_read_per_join  are bigger. But since we read nine times fewer rows overall, the cost is still better.

Conclusion: if the number of columns in the used_columns array is reasonably small, you can use it as a guide for creating a covered index.

 


Percona Server 5.7.10-1 first RC available

MySQL Performance Blog - Mon, 2015-12-14 21:36

Percona is glad to announce the first release candidate of Percona Server 5.7.10-1 on December 14, 2015. Download the latest version from the Percona web site or from the Percona Software Repositories.

This release contains all the bug fixes from latest Percona Server 5.6 release (currently Percona Server 5.6.27-76.0).

New Features:

  • Percona Server 5.7.10-1 is not available on RHEL 5 family of Linux distributions and Debian 6 (squeeze).
  • Complete list of changes between Percona Server 5.6 and 5.7 can be seen in Changed in Percona Server 5.7.

Known Issues:

  • MeCab Full-Text Parser Plugin has not been included in this release.
  • PAM Authentication Plugin currently isn’t working correctly.
  • Variables innodb_show_verbose_locks and innodb_show_locks_help are not working correctly.
  • In Percona Server 5.7, the super_read_only feature has been replaced with the upstream implementation. There are currently two known issues compared to the Percona Server 5.6 implementation:
    • Bug #78963, super_read_only aborts STOP SLAVE if variable relay_log_info_repository is set to TABLE which could lead to a server crash in Debug builds.
    • Bug #79328, super_read_only set as a server option has no effect.
  • Using primary key with a BLOB in TokuDB table could lead to a server crash (bug #916).
  • Using XA transactions with TokuDB could lead to a server crash (bug #900).
  • To install TokuDB on CentOS/RHEL 7 you’ll need to add the line LD_PRELOAD=/usr/lib64/libjemalloc.so.1 to /etc/sysconfig/mysql before running the ps_tokudb_admin script
  • Percona Tokubackup has not been included in this release.

Bugs Fixed:

  • Running ALTER TABLE without specifying the storage engine (without ENGINE= clause) or OPTIMIZE TABLE when enforce_storage_engine was enabled could lead to unrequested and unexpected storage engine changes. If done for a system table, it would circumvent regular system table storage engine compatibility checks, resulting in crashes or otherwise broken server operation. Bug fixed #1488055.
  • Some transaction deadlocks did not increase the INFORMATION_SCHEMA.INNODB_METRICS lock_deadlocks counter. Bug fixed #1466414 (upstream #77399).
  • Removed excessive locking during the buffer pool resize when checking whether AHI is enabled. Bug fixed #1525215 (upstream #78894).
  • Removed unnecessary code in InnoDB error monitor thread. Bug fixed #1521564 (upstream #79477).

Other bugs fixed: #371752 (upstream #45379), #1441362 (upstream #56155), #1385062 (upstream #74810), #1519201 (upstream #79391), #1515602, #1506697 (upstream #57552), #1501089 (upstream #75239), #1447527 (upstream #75368), #1384658 (upstream #74619), #1384656 (upstream #74584), and #1192052.

Release notes for Percona Server 5.7.10-1 are available in the online documentation. Please report any bugs on the launchpad bug tracker .


Percona Server for MongoDB 3.0.7-1.0 is now available

MySQL Performance Blog - Mon, 2015-12-14 05:00

Percona is pleased to announce the GA release of Percona Server for MongoDB 3.0.7-1.0 on December 14, 2015. Download the latest version from the Percona web site or from the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open source, fully compatible, highly scalable, zero-maintenance downtime database supporting the MongoDB v3.0 protocol and drivers. Based on MongoDB 3.0.7, it extends MongoDB with MongoRocks and PerconaFT storage engines, as well as features like external authentication and audit logging. Percona Server for MongoDB requires no changes to MongoDB applications or code.

 

NOTE: The MongoRocks engine included in this release is based on RocksDB 4.0.0. For more information, see the RocksDB changelog.

 

New Features:

  • Percona TokuBackup is an integrated open-source hot backup system for MongoDB servers running the PerconaFT storage engine (including Percona Server for MongoDB)
  • Concurrency optimization for lock-managed storage engines, such as PerconaFT. Pre-locking document ranges on index scans greatly improves performance of such engines.

Bug Fixes:

  • PSMDB-18: Changed the default data directory (dbpath) to /var/lib/mongo for RHEL/CentOS.
  • PSMDB-28: Fixed an Invariant failure when creating a sparse index with the WiredTiger storage engine.

Build and Packaging Fixes:

  • BLD-331: Fixed the service mongod stop command for Debian.
  • BLD-332: Fixed logic for percona-server-mongodb-helper.sh script to correctly handle Transparent Huge Pages.
  • BLD-343: Added the postrm script with appropriate logic to handle proper data files removal when running apt-get purge.
  • BLD-344: Switched to new YAML configuration format.
  • BLD-369: Changed the mongod installation path to /usr/bin.
  • Fixed the mongod system group creation command.

PerconaFT Fixes:

  • FT-456: Serialized FT fanout to disk
  • FT-675: Removed unnecessary assertion on setlocale() success in ctest suite
  • FT-681: Fixed transactional inconsistency with rollback
  • FT-682: Reduced the amount of malloc_usable_size() calls
  • FT-684: Implemented a logical row count mechanism to make the estimated number of rows reflect the actual number of rows in heavily used tables
  • FT-685: Introduced branch prediction macros
  • FT-688: Changed licensing and naming from Tokutek to Percona
  • FT-690: Fixed reserved_filenum overflow
  • FT-692: Added a dummy field to make sizeof(DB_LSN) equal in C and C++
  • FT-699: Added and replaced field initializers that caused build fails
  • FT-702: Replaced the ZERO_COND_INITIALIZER definition with an empty brace pair intializer to stop build fails
  • Fixed memory leak due to data race in context status initialization
  • Fixed memory leak found with address sanitizer
  • Fixed assert in circular buffer assert caused by race
  • Improved database creation time
  • Implemented grabbing of evictor mutex when signaling the evictor condition variable

Percona Server for MongoDB 3.0.7-1.0 release notes are available in the official documentation.


rows_examined_per_scan, rows_produced_per_join: EXPLAIN FORMAT=JSON answers on question “What number of filtered rows mean?”

MySQL Performance Blog - Thu, 2015-12-10 23:00

At the end of my talk “Troubleshooting MySQL Performance” at the LinuxPiter conference, a user asked me a question: “What does the EXPLAIN ‘filtered’ field mean, and how do I use it?” I explained that this is the percentage of rows that were actually needed, out of the equal or larger number of rows examined. While the user was happy with the answer, I’d like to illustrate this in more detail, with the help of EXPLAIN FORMAT=JSON and its rows_examined_per_scan and rows_produced_per_join statistics.
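For a single-table query like the one below, the three fields are tied together by a simple approximate relationship (approximate because all the values are optimizer estimates):

rows_produced_per_join ≈ rows_examined_per_scan × (filtered / 100)

In other words, filtered is the estimated share of examined rows that survive the table’s conditions.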

Let’s take a simple query that searches information about the Russian Federation in the table Country  of the standard world database:

mysql> select * from Country where Name='Russian Federation'\G
*************************** 1. row ***************************
          Code: RUS
          Name: Russian Federation
     Continent: Europe
        Region: Eastern Europe
   SurfaceArea: 17075400.00
     IndepYear: 1991
    Population: 146934000
LifeExpectancy: 67.2
           GNP: 276608.00
        GNPOld: 442989.00
     LocalName: Rossija
GovernmentForm: Federal Republic
   HeadOfState: Vladimir Putin
       Capital: 3580
         Code2: RU
1 row in set (0.00 sec)

It returns a single row – but how many rows were actually used to resolve the query? EXPLAIN will show us:

mysql> 56-explain select * from Country where Name='Russian Federation'; +----+-------------+---------+------------+------+---------------+------+---------+------+------+----------+-------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+---------+------------+------+---------------+------+---------+------+------+----------+-------------+ | 1 | SIMPLE | Country | NULL | ALL | NULL | NULL | NULL | NULL | 239 | 10.00 | Using where | +----+-------------+---------+------------+------+---------------+------+---------+------+------+----------+-------------+ 1 row in set, 1 warning (0.00 sec) Note (Code 1003): /* select#1 */ select `world`.`Country`.`Code` AS `Code`,`world`.`Country`.`Name` AS `Name`,`world`.`Country`.`Continent` AS `Continent`,`world`.`Country`.`Region` AS `Region`,`world`.`Country`.`SurfaceArea` AS `SurfaceArea`,`world`.`Country`.`IndepYear` AS `IndepYear`,`world`.`Country`.`Population` AS `Population`,`world`.`Country`.`LifeExpectancy` AS `LifeExpectancy`,`world`.`Country`.`GNP` AS `GNP`,`world`.`Country`.`GNPOld` AS `GNPOld`,`world`.`Country`.`LocalName` AS `LocalName`,`world`.`Country`.`GovernmentForm` AS `GovernmentForm`,`world`.`Country`.`HeadOfState` AS `HeadOfState`,`world`.`Country`.`Capital` AS `Capital`,`world`.`Country`.`Code2` AS `Code2` from `world`.`Country` where (`world`.`Country`.`Name` = 'Russian Federation')

You can see that 239 rows were examined, and that the filtered value is 10%. But what exactly was done? The explanation is in the EXPLAIN FORMAT=JSON output:

mysql> explain format=json select * from Country where Name='Russian Federation'G *************************** 1. row *************************** EXPLAIN: { "query_block": { "select_id": 1, "cost_info": { "query_cost": "53.80" }, "table": { "table_name": "Country", "access_type": "ALL", "rows_examined_per_scan": 239, "rows_produced_per_join": 23, "filtered": "10.00", "cost_info": { "read_cost": "49.02", "eval_cost": "4.78", "prefix_cost": "53.80", "data_read_per_join": "6K" }, "used_columns": [ "Code", "Name", "Continent", "Region", "SurfaceArea", "IndepYear", "Population", "LifeExpectancy", "GNP", "GNPOld", "LocalName", "GovernmentForm", "HeadOfState", "Capital", "Code2" ], "attached_condition": "(`world`.`Country`.`Name` = 'Russian Federation')" } } } 1 row in set, 1 warning (0.00 sec) Note (Code 1003): /* select#1 */ select `world`.`Country`.`Code` AS `Code`,`world`.`Country`.`Name` AS `Name`,`world`.`Country`.`Continent` AS `Continent`,`world`.`Country`.`Region` AS `Region`,`world`.`Country`.`SurfaceArea` AS `SurfaceArea`,`world`.`Country`.`IndepYear` AS `IndepYear`,`world`.`Country`.`Population` AS `Population`,`world`.`Country`.`LifeExpectancy` AS `LifeExpectancy`,`world`.`Country`.`GNP` AS `GNP`,`world`.`Country`.`GNPOld` AS `GNPOld`,`world`.`Country`.`LocalName` AS `LocalName`,`world`.`Country`.`GovernmentForm` AS `GovernmentForm`,`world`.`Country`.`HeadOfState` AS `HeadOfState`,`world`.`Country`.`Capital` AS `Capital`,`world`.`Country`.`Code2` AS `Code2` from `world`.`Country` where (`world`.`Country`.`Name` = 'Russian Federation')

We are interested in this part:

"rows_examined_per_scan": 239, "rows_produced_per_join": 23, "filtered": "10.00",

It clearly shows that 239 rows were examined per scan, but only about 23 rows (10% of 239) were expected to be used to produce the result. To make this query more efficient we need to add an index on the Name field:

mysql> alter table Country add index(Name); Query OK, 0 rows affected (0.40 sec) Records: 0 Duplicates: 0 Warnings: 0

Now the EXPLAIN  plan is much better: we only examine 1 required row, and the value of filtered is 100%:

mysql> pager egrep 'rows_examined_per_scan|rows_produced_per_join|filtered';
PAGER set to 'egrep 'rows_examined_per_scan|rows_produced_per_join|filtered''
mysql> explain format=json select * from Country where Name='Russian Federation'\G
"rows_examined_per_scan": 1,
"rows_produced_per_join": 1,
"filtered": "100.00",
1 row in set, 1 warning (0.00 sec)
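When you are done, don't forget to reset the pager; the mysql client's nopager command switches the output back to stdout:

mysql> nopager
PAGER set to stdout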

 

The post rows_examined_per_scan, rows_produced_per_join: EXPLAIN FORMAT=JSON answers on question “What number of filtered rows mean?” appeared first on MySQL Performance Blog.

Categories: MySQL

attached_condition: How EXPLAIN FORMAT=JSON can spell-check your queries

MySQL Performance Blog - Tue, 2015-12-08 18:49

When you work with complicated queries, especially ones that contain subqueries, it is easy to make a typo or misplace a column name. While in many cases you will receive a "column not found" error, sometimes you can get strange results instead.

Like finding 4079 cities in Antarctica:

mysql> select count(*) from City where CountryCode in (select CountryCode from Country where Continent = 'Antarctica'); +----------+ | count(*) | +----------+ | 4079 | +----------+ 1 row in set (0.05 sec)

Or not finding any cities in Georgia:

mysql> select Name, Language from City join CountryLanguage using (CountryCode) where CountryCode in (select Code from Country where District='Georgia' and Continent='Asia'); Empty set (0.18 sec)

I used a standard world database for these examples.

The reason for both of these wrong results is a column name placed in the condition for the wrong table. While it may look simple once you already know what is wrong, I've seen support tickets where users were puzzled by simple queries like these.
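For reference, here is what the two queries probably should have looked like (this is my guess at the intent: the country lookup belongs in the Country table, using its Code and Name columns). In the standard world database the first query should return zero cities, and the second should return the cities and languages of the country Georgia:

mysql> select count(*) from City where CountryCode in (select Code from Country where Continent = 'Antarctica');
mysql> select Name, Language from City join CountryLanguage using (CountryCode) where CountryCode in (select Code from Country where Name = 'Georgia' and Continent = 'Asia');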

I always recommend that my customers examine their queries with EXPLAIN EXTENDED, followed by SHOW WARNINGS, and then thoroughly examine the resulting rewritten query.

For example, for the query result mentioned above, we can see something like this:

mysql> W Show warnings enabled. mysql> explain extended select count(*) from City where CountryCode in (select CountryCode from Country where Continent = 'Antarctica'); +----+--------------------+---------+-------+---------------+-------------+---------+------+------+----------+--------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+--------------------+---------+-------+---------------+-------------+---------+------+------+----------+--------------------------+ | 1 | PRIMARY | City | index | NULL | CountryCode | 3 | NULL | 4005 | 100.00 | Using where; Using index | | 2 | DEPENDENT SUBQUERY | Country | ALL | NULL | NULL | NULL | NULL | 227 | 100.00 | Using where | +----+--------------------+---------+-------+---------------+-------------+---------+------+------+----------+--------------------------+ 2 rows in set, 2 warnings (0.01 sec) Note (Code 1276): Field or reference 'world.City.CountryCode' of SELECT #2 was resolved in SELECT #1 Note (Code 1003): select count(0) AS `count(*)` from `world`.`City` where <in_optimizer>(`world`.`City`.`CountryCode`,<exists>(select 1 from `world`.`Country` where ((`world`.`Country`.`Continent` = 'Antarctica') and (<cache>(`world`.`City`.`CountryCode`) = `world`.`City`.`CountryCode`))))

Note (Code 1276): Field or reference 'world.City.CountryCode' of SELECT #2 was resolved in SELECT #1 tells us what is wrong, but without knowledge of the table definitions it is hard to work out why SELECT #2 was resolved in SELECT #1. It also doesn't give us much information if SELECT #1 joins more than one table. For example:

mysql> explain extended select Name, Language from City join CountryLanguage using (CountryCode) where CountryCode in (select Code from Country where District='Georgia' and Continent='Asia'); +----+--------------------+-----------------+-----------------+---------------------+-------------+---------+-----------------------------------+------+----------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+--------------------+-----------------+-----------------+---------------------+-------------+---------+-----------------------------------+------+----------+-------------+ | 1 | PRIMARY | CountryLanguage | index | PRIMARY,CountryCode | CountryCode | 3 | NULL | 1157 | 100.00 | Using index | | 1 | PRIMARY | City | ref | CountryCode | CountryCode | 3 | world.CountryLanguage.CountryCode | 1 | 100.00 | Using where | | 2 | DEPENDENT SUBQUERY | Country | unique_subquery | PRIMARY | PRIMARY | 3 | func | 1 | 100.00 | Using where | +----+--------------------+-----------------+-----------------+---------------------+-------------+---------+-----------------------------------+------+----------+-------------+ 3 rows in set, 2 warnings (0.00 sec) Note (Code 1276): Field or reference 'world.City.District' of SELECT #2 was resolved in SELECT #1 Note (Code 1003): select `world`.`City`.`Name` AS `Name`,`world`.`CountryLanguage`.`Language` AS `Language` from `world`.`City` join `world`.`CountryLanguage` where ((`world`.`City`.`CountryCode` = `world`.`CountryLanguage`.`CountryCode`) and <in_optimizer>(`world`.`City`.`CountryCode`,<exists>(<primary_index_lookup>(<cache>(`world`.`City`.`CountryCode`) in Country on PRIMARY where ((`world`.`Country`.`Continent` = 'Asia') and (`world`.`City`.`District` = 'Georgia') and (<cache>(`world`.`City`.`CountryCode`) = `world`.`Country`.`Code`))))))

It is not clear whether world.City.District was resolved from table City or from table CountryLanguage.

EXPLAIN FORMAT=JSON gives us this information.

mysql> explain format=json select Name, Language from City join CountryLanguage using (CountryCode) where CountryCode in (select Code from Country where District='Georgia' and Continent='Asia')G *************************** 1. row *************************** EXPLAIN: { "query_block": { "select_id": 1, "cost_info": { "query_cost": "907.97" }, "nested_loop": [ { "table": { "table_name": "Country", "access_type": "ALL", "possible_keys": [ "PRIMARY" ], "rows_examined_per_scan": 239, "rows_produced_per_join": 34, "filtered": "14.29", "cost_info": { "read_cost": "46.97", "eval_cost": "6.83", "prefix_cost": "53.80", "data_read_per_join": "8K" }, "used_columns": [ "Code", "Continent" ], "attached_condition": "(`world`.`Country`.`Continent` = 'Asia')" } }, { "table": { "table_name": "City", "access_type": "ref", "possible_keys": [ "CountryCode" ], "key": "CountryCode", "used_key_parts": [ "CountryCode" ], "key_length": "3", "ref": [ "world.Country.Code" ], "rows_examined_per_scan": 18, "rows_produced_per_join": 61, "filtered": "10.00", "cost_info": { "read_cost": "616.34", "eval_cost": "12.33", "prefix_cost": "793.40", "data_read_per_join": "4K" }, "used_columns": [ "Name", "CountryCode", "District" ], "attached_condition": "(`world`.`City`.`District` = 'Georgia')" } }, { "table": { "table_name": "CountryLanguage", "access_type": "ref", "possible_keys": [ "PRIMARY", "CountryCode" ], "key": "CountryCode", "used_key_parts": [ "CountryCode" ], "key_length": "3", "ref": [ "world.Country.Code" ], "rows_examined_per_scan": 4, "rows_produced_per_join": 260, "filtered": "100.00", "using_index": true, "cost_info": { "read_cost": "62.51", "eval_cost": "52.06", "prefix_cost": "907.97", "data_read_per_join": "10K" }, "used_columns": [ "CountryCode", "Language" ] } } ] } } 1 row in set, 2 warnings (0.00 sec) Note (Code 1276): Field or reference 'world.City.District' of SELECT #2 was resolved in SELECT #1 Note (Code 1003): /* select#1 */ select `world`.`City`.`Name` AS `Name`,`world`.`CountryLanguage`.`Language` AS `Language` from `world`.`Country` join `world`.`City` join `world`.`CountryLanguage` where ((`world`.`City`.`CountryCode` = `world`.`Country`.`Code`) and (`world`.`CountryLanguage`.`CountryCode` = `world`.`Country`.`Code`) and (`world`.`Country`.`Continent` = 'Asia') and (`world`.`City`.`District` = 'Georgia'))

All the details are in the attached_condition  element, which contains part of the WHERE clause, attached to this particular table.

Unfortunately, the EXPLAIN output cannot be saved into a variable and processed later with JSON functions, but we can set a pager to filter on the table_name and attached_condition keywords:

mysql> pager egrep 'table_name|attached_condition'
PAGER set to 'egrep 'table_name|attached_condition''
mysql> explain format=json select Name, Language from City join CountryLanguage using (CountryCode) where CountryCode in (select Code from Country where District='Georgia' and Continent='Asia')\G
"table_name": "Country",
"attached_condition": "(`world`.`Country`.`Continent` = 'Asia')"
"table_name": "City",
"attached_condition": "(`world`.`City`.`District` = 'Georgia')"
"table_name": "CountryLanguage",
1 row in set, 2 warnings (0.00 sec)

Knowing that the JSON structure is hierarchical, we clearly see that the condition (`world`.`City`.`District` = 'Georgia') was resolved from table City, while the condition (`world`.`Country`.`Continent` = 'Asia') belongs to table Country. We can conclude that table Country probably does not have a field named District, while table City does. The SHOW CREATE TABLE statements confirm this guess:

mysql> show create table CountryG *************************** 1. row *************************** Table: Country Create Table: CREATE TABLE `Country` ( `Code` char(3) NOT NULL DEFAULT '', `Name` char(52) NOT NULL DEFAULT '', `Continent` enum('Asia','Europe','North America','Africa','Oceania','Antarctica','South America') NOT NULL DEFAULT 'Asia', `Region` char(26) NOT NULL DEFAULT '', `SurfaceArea` float(10,2) NOT NULL DEFAULT '0.00', `IndepYear` smallint(6) DEFAULT NULL, `Population` int(11) NOT NULL DEFAULT '0', `LifeExpectancy` float(3,1) DEFAULT NULL, `GNP` float(10,2) DEFAULT NULL, `GNPOld` float(10,2) DEFAULT NULL, `LocalName` char(45) NOT NULL DEFAULT '', `GovernmentForm` char(45) NOT NULL DEFAULT '', `HeadOfState` char(60) DEFAULT NULL, `Capital` int(11) DEFAULT NULL, `Code2` char(2) NOT NULL DEFAULT '', PRIMARY KEY (`Code`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 1 row in set (0.05 sec) mysql> show create table CityG *************************** 1. row *************************** Table: City Create Table: CREATE TABLE `City` ( `ID` int(11) NOT NULL AUTO_INCREMENT, `Name` char(35) NOT NULL DEFAULT '', `CountryCode` char(3) NOT NULL DEFAULT '', `District` char(20) NOT NULL DEFAULT '', `Population` int(11) NOT NULL DEFAULT '0', PRIMARY KEY (`ID`), KEY `CountryCode` (`CountryCode`), CONSTRAINT `city_ibfk_1` FOREIGN KEY (`CountryCode`) REFERENCES `Country` (`Code`) ) ENGINE=InnoDB AUTO_INCREMENT=4080 DEFAULT CHARSET=latin1 1 row in set (0.00 sec)

This is the reason why the field District  was resolved from the table City, and not Country.

Conclusion: EXPLAIN FORMAT=JSON can help you find out whether column names are mixed up in a query, even without knowledge of the table definitions.

The post attached_condition: How EXPLAIN FORMAT=JSON can spell-check your queries appeared first on MySQL Performance Blog.

Categories: MySQL

Percona Server 5.6.27-76.0 is now available

MySQL Performance Blog - Fri, 2015-12-04 16:54

Percona is glad to announce the release of Percona Server 5.6.27-76.0 on December 4, 2015. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.6.27, including all the bug fixes in it, Percona Server 5.6.27-76.0 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – and this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release can be found in the 5.6.27-76.0 milestone on Launchpad.

New Features:

  • SHOW SLAVE STATUS NOLOCK syntax in 5.6 has been undeprecated. Both SHOW SLAVE STATUS NOLOCK and SHOW SLAVE STATUS NONBLOCKING are now supported (see the example after this list). Percona Server originally used the SHOW SLAVE STATUS NOLOCK syntax for this feature. As of the 5.6.20-68.0 release, Percona Server implements the SHOW SLAVE STATUS NONBLOCKING syntax, which comes from early MySQL 5.7. Current MySQL 5.7 does not have this syntax, and the regular SHOW SLAVE STATUS is non-blocking.
  • TokuDB tables can now be automatically analyzed in the background based on a measured change in data.
  • Percona Server has implemented a new tokudb_strip_frm_data variable that can be used to assist in TokuDB data recovery. WARNING: Use this variable only if you know what you're doing; otherwise it could lead to data loss.
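As a quick illustration of the first item in the list above, both of these statements are accepted by this release (output omitted):

mysql> SHOW SLAVE STATUS NOLOCK\G
mysql> SHOW SLAVE STATUS NONBLOCKING\G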

Bugs Fixed:

  • Setting the tokudb_backup_last_error_string and tokudb_backup_last_error values manually could cause a server assertion. Bug fixed #1512464.
  • Fixed invalid memory accesses when mysqldump was running with --innodb-optimize-keys option. Bug fixed #1517444.
  • Fixed incorrect filename specified in storage/tokudb/PerconaFT/buildheader/CMakeLists.txt which could cause subsequent builds to fail. Bug fixed #1510085 (Sergei Golubchik).
  • Fixed multiple issues with TokuDB CMake scripts. Bugs fixed #1510092, #1509219 and #1510081 (Sergei Golubchik).
  • An upstream fix for upstream bug #76135 might cause the server to stall or hang. Bug fixed #1519094 (upstream #79185).
  • ps_tokudb_admin now prevents Percona TokuBackup activation if there is no TokuDB storage engine on the server. Bug fixed #1520099.
  • Percona TokuBackup plugin now gets removed during the TokuDB storage engine uninstall process. Bug fixed #1520472.
  • A new --defaults-file option has been implemented for ps_tokudb_admin to specify the MySQL configuration file if it's not in the default location. Bug fixed #1517021.

Other bugs fixed: #1425480, #1517523, #1515741 and #1513178.

Release notes for Percona Server 5.6.27-76.0 are available in the online documentation. Please report any bugs on the launchpad bug tracker .

The post Percona Server 5.6.27-76.0 is now available appeared first on MySQL Performance Blog.

Categories: MySQL

Percona Server 5.5.46-37.6 is now available

MySQL Performance Blog - Fri, 2015-12-04 16:52


Percona is glad to announce the release of Percona Server 5.5.46-37.6 on December 4, 2015. Based on MySQL 5.5.46, including all the bug fixes in it, Percona Server 5.5.46-37.6 is now the current stable release in the 5.5 series.

Percona Server is open-source and free. Details of the release can be found in the 5.5.46-37.6 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed:

  • An upstream fix for upstream bug #76135 might cause the server to stall or hang. Bug fixed #1519094 (upstream #79185).
  • Fixed invalid memory accesses when mysqldump was running with --innodb-optimize-keys option. Bug fixed #1517444.

Other bugs fixed: #1517523.

Release notes for Percona Server 5.5.46-37.6 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Percona Server 5.5.46-37.6 is now available appeared first on MySQL Performance Blog.

Categories: MySQL

Amazon Aurora in sysbench benchmarks

MySQL Performance Blog - Thu, 2015-12-03 17:22

In my previous post Amazon Aurora – Looking Deeper, I promised benchmark results on Amazon Aurora.

There are already some results available from Amazon itself: https://d0.awsstatic.com/product-marketing/Aurora/RDS_Aurora_Performance_Assessment_Benchmarking_v1-2.pdf.

There are also some from Marco Tusa: http://www.tusacentral.net/joomla/index.php/mysql-blogs/175-aws-aurora-benchmarking-blast-or-splash.html.

Amazon used quite a small dataset in their benchmark: 250 tables, with 25000 rows each, which in my calculation corresponds to 4.5GB worth of data. For this datasize, Amazon used r3.8xlarge instances, which provided 32 virtual CPUs and 244GB of memory. So I can’t say their benchmark is particularly illustrative, as all the data fits nicely into the available memory.

In my benchmark, I wanted to try different datasizes, and also compare Amazon Aurora with Percona Server 5.6 in identical cloud instances.

You can find my full report here: http://benchmark-docs.readthedocs.org/en/latest/benchmarks/aurora-sysbench-201511.html

Below is a short description of the benchmark:

Data Sizes

  • Initial dataset. 32 sysbench tables, 50 million (mln) rows each. It corresponds to about 400GB of data.
  • Testing sizes. For this benchmark, we vary the maximum amount of rows used by sysbench: 1mln, 2.5mln, 5mln, 10mln, 25mln, 50mln.

In the chart, the results are marked in thousands of rows: 1000, 2500, 5000, 10000, 25000, 50000. In other words, “1000” corresponds to 1mln rows.

Instance Sizes
It is actually quite complicated to find an equivalent configuration (in both performance and price) for comparing Percona Server running on an EC2 instance with Amazon Aurora.

Amazon Aurora:

  • db.r3.xlarge instance (4 virtual CPUS + 30GB memory)
  • Monthly computing cost (1-YEAR TERM, No Upfront): $277.40
  • Monthly storage cost: $0.100 per GB-month * 400 GB = $40
  • Extra $0.200 per 1 million IO requests

Total cost (per month, excluding extra per IO requests): $311.40

Percona Server:

  • r3.xlarge instance (4 virtual CPUS + 30GB memory)
  • Monthly computing cost (1-YEAR TERM, No Upfront): $160.60

For the storage we will use 3 options:

  • General purpose SSD volume (marked as “ps” in charts), 500GB size, 1,500 baseline / 3,000 burst IOPS, cost: $0.10 per GB-month * 500 = $50
  • Provisioned IOPS SSD volume (marked as “ps-io3000”), 500GB, 3,000 IOPS: $0.125 per GB-month * 500 + $0.065 per provisioned IOPS-month * 3000 = $62.5 + $195 = $257.5
  • Provisioned IOPS SSD volume (marked as “ps-io2000”), 500GB, 2,000 IOPS: $0.125 per GB-month * 500 + $0.065 per provisioned IOPS-month * 2000 = $62.5 + $130 = $192.5

So the corresponding total costs (per month) for the EC2 instances used are: $210.60 (general purpose SSD), $418.10 (ps-io3000) and $353.10 (ps-io2000).

Results

More graphs, including timelines, are available by the link http://benchmark-docs.readthedocs.org/en/latest/benchmarks/aurora-sysbench-201511.html

Summary results of Amazon Aurora vs. Percona Server across the different datasizes (the summary charts are available in the full report linked above):

Observations
There are a few important points to highlight:

  • Even in long runs (2 hours) I didn’t see a fluctuation in results. The throughput is stable.
  • I actually made one run for 48 hours. There were still no fluctuations.
  • For Percona Server, as expected, better storage gives better throughput. 3,000 IOPS is better than Amazon Aurora, especially for IO-heavy cases.
  • Amazon Aurora shows worse results with smaller datasizes. Aurora outperforms Percona Server (with general purpose SSD and provisioned SSD 2000IOPS volumes) when it comes to big datasizes.
  • It appears that Amazon Aurora does not benefit from adding extra memory – the throughput does not grow much with small datasizes. I think it proves my assumption that Aurora has some kind of write-through cache, which shows better results in IO-heavy workloads.
  • Provisioned IO volumes indeed give much better performance compared to general purpose volume, though they are more expensive.
  • From a cost perspective, 3,000 IOPS is more cost efficient than 2,000 IOPS for this particular case (in your workload it might be different), in the sense that it gives more throughput per dollar.

The post Amazon Aurora in sysbench benchmarks appeared first on MySQL Performance Blog.

Categories: MySQL

Upcoming Webinar: MySQL 5.7 in a Nutshell

MySQL Performance Blog - Wed, 2015-12-02 22:35

Join Alexander Rubin, Principal Consultant at Percona, as he provides an overview of MySQL 5.7 features on Wednesday, December 9, 2015, at 10:00AM PST (GMT -08:00).

MySQL® 5.7 is a great release that has a lot to offer, especially in the areas of development and replication. It provides many new optimizer features for developers, a much more powerful GIS function, and a high performance JSON data type – allowing for a more powerful store for semi-structured data. It also features a dramatically improved Performance Schema and Parallel and Multi-Source replication – allowing you to scale much further than ever before.

Primary manageability enhancements within MySQL 5.7 include:

  • New Native JSON Data Type and JSON Functions: Allows for efficient and flexible storage, search, and manipulation of schema-less data.
  • Performance Schema: Enables instrumentation for memory, transactions, stored routines, prepared statements, replication, and locks.
  • MySQL SYS Schema: Provides helper objects that answer common performance, health, usage, and monitoring questions.
  • Improved Security: Delivers easier and safer instance initialization, setup and management.
  • Expanded Geographic Information System (GIS) Support for Mobile Applications: Provides spatial index support in InnoDB, GeoJSON, and GeoHash.

Key enhancements in MySQL 5.7 for greater performance and scalability include:

  • Enhanced Speed: In benchmark tests MySQL 5.7 delivered 1,600,000 queries per second (QPS) – 3x faster than MySQL 5.6.
  • Optimized InnoDB: New capabilities include increased performance and concurrency, enhanced on-line operations, spatial indexes, and native partitioning.
  • More Robust Replication: Enhancements to MySQL’s replication features include multi-source replication, enhanced Global Transaction Identifiers (GTIDs), and improved multi-threaded slaves for better scalability and availability.
  • Enhanced Optimizer: A new MySQL optimizer dynamic cost model provides better query performance and greater user control.

This webinar will be part of a 3-part series, which will include MySQL 5.7 for developers and MySQL 5.7 for DBAs.

Register here:

https://www.percona.com/resources/webinars/mysql-57-nutshell

The post Upcoming Webinar: MySQL 5.7 in a Nutshell appeared first on MySQL Performance Blog.

Categories: MySQL

Fixing errant transactions with mysqlslavetrx prior to a GTID failover

MySQL Performance Blog - Wed, 2015-12-02 18:05

Errant transactions are a major issue when using GTID replication. Although this isn't something new, the drawbacks are more noticeable with GTID than with regular replication.

The situation where errant transactions bite you is a common DBA task: failover. Now that tools like MHA support GTID replication (starting from version 0.56), GTID-based replication is becoming more popular, and so are the issues with errant transactions. Luckily, the fix is as simple as injecting an empty transaction on the servers that lack the transaction. You can easily do this through the master, and it will be propagated to all the slaves.

Let’s consider the following situations:

  • What happens when the master blows up into the air and is out of the picture?
  • What happens when there’s not just one but dozens of errant transactions?
  • What happens when you have a high number of slaves?

Things start to become a little more complex.

A side note for the first case: when your master is no longer available, how can you find errant transactions? Well, you can’t. In this case, you should check for errant transactions between your slaves and your former slave/soon-to-be master.
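For example, assuming you have captured the Executed_Gtid_Set value from SHOW SLAVE STATUS on each slave, a quick way to compare a slave against the candidate master is to run GTID_SUBTRACT on the candidate master itself (the GTID set below is just a placeholder):

candidate-master> SELECT GTID_SUBTRACT('<executed-gtid-set-from-other-slave>', @@global.gtid_executed) AS missing_on_candidate;

Any non-empty result means the other slave has executed transactions that the candidate master does not have, and those are the GTIDs you need to investigate.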

Let's think about alternatives. What's the workaround to injecting empty transactions, one by one, for every single errant transaction on every single slave? The MySQL utility mysqlslavetrx. Basically, this utility allows us to skip multiple transactions on multiple slaves in a single step.

One way to install the MySQL utilities is by executing the following steps:

  • wget http://dev.mysql.com/get/Downloads/MySQLGUITools/mysql-utilities-1.6.2.tar.gz
  • tar -xvzf mysql-utilities-1.6.2.tar.gz
  • cd mysql-utilities-1.6.2
  • python ./setup.py build
  • sudo python ./setup.py install

And you’re ready.

What about some examples? Let’s say we have a Master/Slave server with GTID replication, current status as follows:

mysql> show master status; +------------------+----------+--------------+------------------+------------------------------------------+ | File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set | +------------------+----------+--------------+------------------+------------------------------------------+ | mysql-bin.000002 | 530 | | | 66fbd3be-976e-11e5-a8fb-1256731a26b7:1-2 | +------------------+----------+--------------+------------------+------------------------------------------+ 1 row in set (0.00 sec)
mysql> show slave status\G ... Executed_Gtid_Set: 66fbd3be-976e-11e5-a8fb-1256731a26b7:1-2 Auto_Position: 1 1 row in set (0.00 sec)

Add chaos to the slave in form of a new schema:

mysql> create database percona; Query OK, 1 row affected (0.00 sec)

Now we have an errant transaction!!!!!

The slave status looks different:

mysql> show slave status\G ... Executed_Gtid_Set: 66fbd3be-976e-11e5-a8fb-1256731a26b7:1-2, 674a625e-976e-11e5-a8fb-125cab082fc3:1 Auto_Position: 1 1 row in set (0.00 sec)

By using the GTID_SUBSET function we can confirm that things go from “all right” to “no-good”:

Before:

mysql> select gtid_subset('66fbd3be-976e-11e5-a8fb-1256731a26b7:1-2','66fbd3be-976e-11e5-a8fb-1256731a26b7:1-2') as is_subset; +-----------+ | is_subset | +-----------+ | 1 | +-----------+ 1 row in set (0.00 sec)

After:

mysql> select gtid_subset('66fbd3be-976e-11e5-a8fb-1256731a26b7:1-2,674a625e-976e-11e5-a8fb-125cab082fc3:1','66fbd3be-976e-11e5-a8fb-1256731a26b7:1-2') as is_subset; +-----------+ | is_subset | +-----------+ | 0 | +-----------+ 1 row in set (0.00 sec)

All right, it’s a mess, got it. What’s the errant transaction? The GTID_SUBTRACT function will tell us:

mysql> select gtid_subtract('66fbd3be-976e-11e5-a8fb-1256731a26b7:1-2,674a625e-976e-11e5-a8fb-125cab082fc3:1','66fbd3be-976e-11e5-a8fb-1256731a26b7:1-2') as errand; +----------------------------------------+ | errand | +----------------------------------------+ | 674a625e-976e-11e5-a8fb-125cab082fc3:1 | +----------------------------------------+ 1 row in set (0.00 sec)

The classic way to fix this is by injecting an empty transaction:

mysql> SET GTID_NEXT='674a625e-976e-11e5-a8fb-125cab082fc3:1'; Query OK, 0 rows affected (0.00 sec) mysql> begin; Query OK, 0 rows affected (0.00 sec) mysql> commit; Query OK, 0 rows affected (0.00 sec) mysql> SET GTID_NEXT='AUTOMATIC'; Query OK, 0 rows affected (0.00 sec)

After this, the errant transaction won’t be errant anymore.

mysql> show master status; +------------------+----------+--------------+------------------+----------------------------------------------------------------------------------+ | File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set | +------------------+----------+--------------+------------------+----------------------------------------------------------------------------------+ | mysql-bin.000002 | 715 | | | 66fbd3be-976e-11e5-a8fb-1256731a26b7:1-2, 674a625e-976e-11e5-a8fb-125cab082fc3:1 | +------------------+----------+--------------+------------------+----------------------------------------------------------------------------------+ 1 row in set (0.00 sec)

Okay, let’s add another slave to the mix. Now is the moment where the mysqlslavetrx utility becomes very handy.

What you need to know is:

  • The slave’s IP address
  • The GTID set

It will be simple to execute:

mysqlslavetrx --gtid-set=6aa9a742-8284-11e5-a09b-12aac3869fc9:1 --verbose --slaves=user:password@172.16.1.143:3306,user:password@172.16.1.144

The verbose output will look something this:

# GTID set to be skipped for each server:
# - 172.16.1.143@3306: 6aa9a742-8284-11e5-a09b-12aac3869fc9:1
# - 172.16.1.144@3306: 6aa9a742-8284-11e5-a09b-12aac3869fc9:1
#
# Injecting empty transactions for '172.16.1.143:3306'...
# - 6aa9a742-8284-11e5-a09b-12aac3869fc9:1
# Injecting empty transactions for '172.16.1.144:3306'...
# - 6aa9a742-8284-11e5-a09b-12aac3869fc9:1
#
#...done.
#

You can run mysqlslavetrx from anywhere (master or any slave). You just need to be sure that the user and password are valid, and have the SUPER privilege required to set the gtid_next variable.
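If you need to create a dedicated user for this, a minimal sketch could look like the following (the user name, host mask and password are placeholders; create the user so it exists on every server mysqlslavetrx will connect to, or create it on the master and let the grant replicate):

mysql> CREATE USER 'mysqlslavetrx'@'172.16.1.%' IDENTIFIED BY '<strong-password>';
mysql> GRANT SUPER ON *.* TO 'mysqlslavetrx'@'172.16.1.%';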

As a summary: Take advantage of the MySQL utilities. In this particular case, mysqlslavetrx is extremely useful when using GTID replication and you want to perform a clean failover. It can be added as a pre-script for MHA failover (which supports GTID since the 0.56 version) or can be simply used to maintain consistency between master and slaves.

The post Fixing errant transactions with mysqlslavetrx prior to a GTID failover appeared first on MySQL Performance Blog.

Categories: MySQL

How to upgrade your master server with minimal downtime

MySQL Performance Blog - Tue, 2015-12-01 17:31

Here’s a step-by-step guide on how to invert roles for master and slave so you can perform a master server upgrade, and then switch roles back to the original setup.

* While following this guide consider server-A as your original master and server-B as your original slave. We will assume server-B already produces binlogs and that both nodes have log-slave-updates=1 in my.cnf


Check this following link for more details on log-slave-updates:
https://dev.mysql.com/doc/refman/5.6/en/replication-options-slave.html#option_mysqld_log-slave-updates

1. Prepare the original Slave to invert roles.

Make sure the slave is not in read-only mode. read_only should be 0; if not, change it:

server-B> SELECT @@global.read_only;
server-B> SET GLOBAL read_only=0;



* For critical service, you might also want to make sure the slave is actually a perfect replica by running a checksum of the tables with pt-table-checksum.

 

Check SHOW SLAVE STATUS\G on server-B until you see that Seconds_Behind_Master is zero:

server-B> SHOW SLAVE STATUS\G

 

When server-B is fully caught up, issue FLUSH TABLES WITH READ LOCK; from a mysql client prompt on server-A.

At this point your users will suffer a service request interruption and downtime starts ticking.

* DO NOT CLOSE this mysql client; otherwise the lock will be lost. We’ll use this same session to run the rest of the commands in server-A.

server-A> FLUSH TABLES WITH READ LOCK;

 

Get master coordinates:

server-A> SHOW MASTER STATUS;

 

Run SHOW SLAVE STATUS\G on server-B.
Repeat until Relay_Master_Log_File and Exec_Master_Log_Pos match the file and position from the previous step.

server-B> SHOW SLAVE STATUS\G

 

Check that SHOW MASTER STATUS; is not changing on server-B (to ensure that there are no writes local to server-B), then stop the slave:

server-B> SHOW MASTER STATUS; server-B> STOP SLAVE; server-B> RESET SLAVE ALL;

 

Run SHOW MASTER STATUS\G on server-B and save this information in a safe place.

Also make sure binlogs from that position and onwards are kept until you bring server-A back online. Otherwise you’ll need to rebuild server-A.

server-B> SHOW MASTER STATUS\G
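One way to double-check which binary logs are still present on server-B, and to extend the retention window while server-A is being serviced (expire_logs_days is a dynamic global variable; the 7 days below is just an example value), is:

server-B> SHOW BINARY LOGS;
server-B> SET GLOBAL expire_logs_days = 7;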


2. Reverse roles for Master – Slave and upgrade original Master

Direct traffic to server-B: point the VIP to server-B, switch via DNS (not really recommended), or update the configuration files of your API services.

In a separate shell session on server-A, stop MySQL.

Now server-A can be shut down and serviced

* At this point you’re safe to upgrade server-A.

* After you’ve finished upgrading your server you can continue this guide to put server-A back as master.

 

Start MySQL in server-A (with skip-slave-start!) and run RESET SLAVE ALL;

server-A> RESET SLAVE ALL;

 

Reconfigure the slave on server-A with CHANGE MASTER TO ... and start it.
Make sure to use the same MASTER_LOG_FILE and MASTER_LOG_POS values that you previously got from running SHOW MASTER STATUS\G on server-B.

 

server-A> CHANGE MASTER TO MASTER_HOST='<IP-of-server-B>', MASTER_USER='<your-replication-user>', MASTER_PASSWORD='<replication-user-password>', MASTER_LOG_FILE='<file-from-serverB>', MASTER_LOG_POS=<pos-from-serverB>;
server-A> SET GLOBAL read_only=1;
server-A> START SLAVE;

 

Make sure replication is running

Repeat until you see Seconds_Behind_Master  is 0.

server-A> SHOW SLAVE STATUS\G

* Here is the moment where the roles are fully inverted, and A is an up-to-date slave of B.

 

3. Prepare the original Master to set back the original roles.


When server-A is fully caught up, issue FLUSH TABLES WITH READ LOCK; from a mysql client prompt on server-B.

DO NOT CLOSE this mysql client on server-B; otherwise the lock will be lost. We’ll use this same session to run the rest of the commands in server-B.

server-B> FLUSH TABLES WITH READ LOCK;

* DO NOT CLOSE THIS CLIENT SESSION! *

 

Check master status

server-B> SHOW MASTER STATUS;

 

Run SHOW SLAVE STATUS\G on server-A. Repeat until Relay_Master_Log_File and Exec_Master_Log_Pos match the file and position from SHOW MASTER STATUS; on server-B.

server-A> SHOW SLAVE STATUS\G

 

Get master coordinates:

server-A> SHOW MASTER STATUS\G
server-A> SET GLOBAL read_only=0;

4. Set back the original roles for Master – Slave

Direct traffic to server-A: point the VIP to server-A, switch via DNS (not really recommended), or update the configuration files of your API services.

In a separate shell session in server-B, restart mysql.

Make server-B again a slave of server-A:

server-B> CHANGE MASTER TO MASTER_HOST='<IP-of-server-A>', MASTER_USER='<your-replication-user>', MASTER_PASSWORD='<replication-user-password>', MASTER_LOG_FILE='<file-from-server-A>', MASTER_LOG_POS=<pos-from-server-A>;
server-B> SET GLOBAL read_only=1;
server-B> START SLAVE;
server-B> SHOW SLAVE STATUS\G

 

Stop the slave threads in server-A:

server-A> STOP SLAVE; server-A> RESET SLAVE ALL;

 

* If you have a chained replication setup, you should take the log_slave_updates variable into consideration; it is relied upon in this procedure and should be enabled.
i.e.: in a replication chain A > B > C, for C to receive updates from A, B needs to write them to its binary logs, and that is what this option does.
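A quick way to confirm the setting is shown below; note that log_slave_updates is read-only at runtime, so changing it requires editing my.cnf and restarting the server:

server-B> SELECT @@global.log_slave_updates;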

For more information regarding this variable, please check the following link:
https://dev.mysql.com/doc/refman/5.6/en/replication-options-binary-log.html#sysvar_log_slave_updates

The post How to upgrade your master server with minimal downtime appeared first on MySQL Performance Blog.

Categories: MySQL

Transporting tablespace from MySQL 5.6 to MySQL 5.7 (case study)

MySQL Performance Blog - Tue, 2015-12-01 07:32

Recently, I was working on a MySQL support ticket where a customer was facing an issue while transporting a tablespace from MySQL 5.6 to MySQL 5.7.
After closely reviewing the situation, I saw that while importing the tablespace they were getting errors such as:

ERROR 1808 (HY000): Schema mismatch (Table flags don't match, server table has 0x10 and the meta-data file has 0x1)

After some research, I found that there is a similar bug reported to MySQL for this issue (https://bugs.mysql.com/bug.php?id=76142), but no solution is mentioned. I tested the scenario locally and found a solution that I will detail in this post.

First, I created a table on MySQL 5.6

nilnandan.joshi@bm-support01:~$ mysql -uroot -p --socket=/tmp/mysql_sandbox5624.sock Enter password: Welcome to the MySQL monitor. Commands end with ; or g. Your MySQL connection id is 35 Server version: 5.6.24 MySQL Community Server (GPL) ... mysql> create database nil; Query OK, 1 row affected (0.02 sec) mysql> use nil; Database changed mysql> create table nil(id int, name varchar(10)); Query OK, 0 rows affected (0.01 sec) mysql> insert into nil values (1, 'nilnandan'),(2, 'niljoshi'),(3, 'njoshi'),(4,'joshi'); Query OK, 4 rows affected (0.10 sec) Records: 4 Duplicates: 0 Warnings: 0 mysql> select * from nil; +------+-----------+ | id | name | +------+-----------+ | 1 | nilnandan | | 2 | niljoshi | | 3 | njoshi | | 4 | joshi | +------+-----------+ 4 rows in set (0.02 sec)

Then I took a backup of the nil database and exported it:

nilnandan.joshi@bm-support01:~$ innobackupex --defaults-file=/home/njoshi/sandboxes/msb_5_6_24/my.sandbox.cnf --user=root --password=msandbox --databases="nil" /home/njoshi/backup/ 151127 01:29:14 innobackupex: Starting the backup operation .... 151127 01:29:16 Backup created in directory '/home/njoshi/backup//2015-11-27_01-29-14' 151127 01:29:16 [00] Writing backup-my.cnf 151127 01:29:16 [00] ...done 151127 01:29:16 [00] Writing xtrabackup_info 151127 01:29:16 [00] ...done xtrabackup: Transaction log of lsn (1695477) to (1695477) was copied. 151127 01:29:16 completed OK! nilnandan.joshi@bm-support01:~$ nilnandan.joshi@bm-support01:~$ innobackupex --apply-log --export backup/2015-11-27_01-29-14 151127 01:32:25 innobackupex: Starting the apply-log operation ... xtrabackup: export option is specified. xtrabackup: export metadata of table 'nil/nil' to file `./nil/nil.exp` (1 indexes) xtrabackup: name=GEN_CLUST_INDEX, id.low=31, page=3 ... InnoDB: Starting shutdown... InnoDB: Shutdown completed; log sequence number 1706518 151127 01:32:28 completed OK! nilnandan.joshi@bm-support01:~$ nilnandan.joshi@bm-support01:~/backup/2015-11-27_01-29-14/nil$ ll total 140 drwx------ 2 nilnandan.joshi percona 4096 Nov 27 01:32 ./ drwx------ 3 nilnandan.joshi percona 4096 Nov 27 01:32 ../ -rw-r----- 1 nilnandan.joshi percona 65 Nov 27 01:29 db.opt -rw-r--r-- 1 nilnandan.joshi percona 421 Nov 27 01:32 nil.cfg -rw-r--r-- 1 nilnandan.joshi percona 16384 Nov 27 01:32 nil.exp -rw-r----- 1 nilnandan.joshi percona 8586 Nov 27 01:29 nil.frm -rw-r----- 1 nilnandan.joshi percona 98304 Nov 27 01:29 nil.ibd nilnandan.joshi@bm-support01:~/backup/2015-11-27_01-29-14/nil$

Now, on MySQL 5.7, I created the nil table, discarded its tablespace, copied the .cfg and .ibd files from the backup to the datadir, and set the proper permissions.

nilnandan.joshi@bm-support01:~/backup/2015-11-27_01-29-14/nil$ mysql -uroot -p --socket=/tmp/mysql_sandbox13454.sock Enter password: Welcome to the MySQL monitor. Commands end with ; or g. Your MySQL connection id is 16 Server version: 5.7.9-log MySQL Community Server (GPL) ... mysql> create database nil; Query OK, 1 row affected (0.04 sec) mysql> use nil Database changed mysql> create table nil(id int, name varchar(10)); Query OK, 0 rows affected (0.04 sec) mysql> ALTER TABLE nil DISCARD TABLESPACE; Query OK, 0 rows affected (0.00 sec) mysql> quit Bye nilnandan.joshi@bm-support01:/home/nilnandan.joshi/sandboxes/rsandbox_mysql-5_7_9/master/data/nil$ sudo cp /home/njoshi/backup/2015-11-27_01-29-14/nil/nil.cfg . nilnandan.joshi@bm-support01:/home/nilnandan.joshi/sandboxes/rsandbox_mysql-5_7_9/master/data/nil$ sudo cp /home/njoshi/backup/2015-11-27_01-29-14/nil/nil.ibd . nilnandan.joshi@bm-support01:/home/nilnandan.joshi/sandboxes/rsandbox_mysql-5_7_9/master/data/nil$ nilnandan.joshi@bm-support01:/home/nilnandan.joshi/sandboxes/rsandbox_mysql-5_7_9/master/data/nil$ mysql -uroot -p --socket=/tmp/mysql_sandbox13454.sock Enter password: Welcome to the MySQL monitor. Commands end with ; or g. Your MySQL connection id is 17 Server version: 5.7.9-log MySQL Community Server (GPL) ... mysql> use nil Reading table information for completion of table and column names You can turn off this feature to get a quicker startup with -A Database changed mysql> show tables; +---------------+ | Tables_in_nil | +---------------+ | nil | +---------------+ 1 row in set (0.00 sec) mysql> ALTER TABLE nil IMPORT TABLESPACE; ERROR 1808 (HY000): Schema mismatch (Table flags don't match, server table has 0x5 and the meta-data file has 0x1)

After all that, I got the same error, but I still couldn't find any specific message about the cause of the problem. When I deleted the .cfg file and tried again, I got the exact error message:

mysql> ALTER TABLE nil IMPORT TABLESPACE; ERROR 1808 (HY000): Schema mismatch (Table has ROW_TYPE_DYNAMIC row format, .ibd file has ROW_TYPE_COMPACT row format.)
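A quick way to compare the row formats on the source and destination servers is to query information_schema (shown here for the nil schema):

mysql> SELECT TABLE_NAME, ROW_FORMAT FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'nil';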

So I dropped the table on MySQL 5.7, created the same table with the row_format=compact option, copied both the .cfg and .ibd files again, and this time it worked:

mysql> drop table nil; Query OK, 0 rows affected (0.00 sec) mysql> create table nil(id int, name varchar(10)) row_format=compact; Query OK, 0 rows affected (0.00 sec) mysql> ALTER TABLE nil DISCARD TABLESPACE; Query OK, 0 rows affected (0.00 sec) mysql> quit Bye nilnandan.joshi@bm-support01:/home/nilnandan.joshi/sandboxes/rsandbox_mysql-5_7_9/master/data/nil$ sudo cp /home/njoshi/backup/2015-11-27_01-29-14/nil/nil.ibd . nilnandan.joshi@bm-support01:/home/nilnandan.joshi/sandboxes/rsandbox_mysql-5_7_9/master/data/nil$ sudo cp /home/njoshi/backup/2015-11-27_01-29-14/nil/nil.cfg . mysql> ALTER TABLE nil IMPORT TABLESPACE; Query OK, 0 rows affected (0.00 sec) mysql> select * from nil; +------+-----------+ | id | name | +------+-----------+ | 1 | nilnandan | | 2 | niljoshi | | 3 | njoshi | | 4 | joshi | +------+-----------+ 4 rows in set (0.01 sec)

I'm not sure if the same issue occurs every time, but one thing is sure: removing the .cfg file gives you the exact cause of the problem and lets you resolve it.

Why does moving a tablespace from MySQL 5.6 to MySQL 5.7 give you this error? The answer is that the default innodb_file_format was changed in MySQL 5.7 from Antelope to Barracuda:

Important Change; InnoDB: The following changes were made to InnoDB configuration parameter default values:
  • The innodb_file_format default value was changed to Barracuda. The previous default value was Antelope. This change allows tables to use Compressed or Dynamic row formats.

https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-7.html
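If you want to verify which defaults are in effect on each server, you can check the related variables (innodb_default_row_format exists only in MySQL 5.7.9 and later, so run just the first statement on 5.6):

mysql> SHOW VARIABLES LIKE 'innodb_file_format';
mysql> SHOW VARIABLES LIKE 'innodb_default_row_format';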

I've already submitted a bug report to MySQL for this error and the inappropriate error message: https://bugs.mysql.com/bug.php?id=79469

BTW: here is the link describing how you can transport tablespaces with XtraBackup:
https://www.percona.com/doc/percona-xtrabackup/2.3/howtos/recipes_ibkx_partition.html

The post Transporting tablespace from MySQL 5.6 to MySQL 5.7 (case study) appeared first on MySQL Performance Blog.

Categories: MySQL

My proposals for Percona Live: Window Functions and ANALYZE for statements

Sergey Petrunia's blog - Mon, 2015-11-30 17:05

I’ve made two session proposals for Percona Live conference:

If you feel these talks are worth it, please vote!

Categories: MySQL