MySQL

New mydumper 0.6.1 release offers performance and usability features

MySQL Performance Blog - Mon, 2014-03-10 16:00

One of the tasks within Percona Remote DBA is to ensure we have reliable backups with minimal impact. To accomplish this, one of the tools our team uses is called mydumper. We use mydumper for logical backups because of several nice features. Some of them are:

  • multithreaded, producing very fast backups compared to mysqldump
  • almost no locking, as long as you are not dumping non-InnoDB tables
  • built-in compression
  • separate files for each table, making it easy to restore single tables or schemas, and the possibility to hardlink files, reducing the space needed to keep a history of backups. This feature also gives you the possibility to restore with more than one thread.

The mydumper project was started at the beginning of 2009 by Domas Mituzas, and a few months ago we started collaborating and adding new features to improve performance and usability.

And now we are glad to announce our second release: mydumper 0.6.1. You can download the source code here. It is highly recommended to upgrade it if you are using 0.6.0 as we fixed and improved the new less-locking feature.

New features in the mydumper 0.6 series:

  • Consistent backups with less locking
    This new feature locks all non-InnoDB tables with the dumping threads at the beginning, so the FLUSH TABLES WITH READ LOCK can be released earlier and there is no need to wait until all non-InnoDB tables have been dumped. You can take advantage of this feature when you have large ARCHIVE or MyISAM tables.
  • File size chunks
    Now you can split a table dump into different files of a fixed size. This is useful for reducing the storage capacity needed to keep historical backups by using hardlinks. Think of big “log” tables or tables where old data doesn’t change: now you will be able to hardlink back those chunks.
  • Metadata Locking
    Added a new option, --use-savepoints, to reduce metadata locking issues while the backup is running (a combined invocation using these options is sketched below).
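
For illustration, here is a hedged sketch of a mydumper 0.6.x invocation that combines these features. The host, credentials, chunk size and output directory are made up, and the exact option names should be verified against “mydumper --help” for your build:

# Sketch only: option names per mydumper 0.6; verify with "mydumper --help".
mydumper --host=127.0.0.1 --user=backup --password=secret \
  --threads=4 --compress --less-locking --use-savepoints \
  --chunk-filesize=64 --outputdir=/backups/mydumper-2014-03-10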

Fixed Bugs in the mydumper 0.6 series:

mydumper 0.6.0

  • #1250269 ensure statement is not bigger than statement_size
  • #827328 myloader to set UNIQUE_CHECKS = 0 when importing
  • #993714 Reducing the time spent with a global read lock
  • #1250271 make it more obvious when mydumper is not successful
  • #1250274 table doesnt exist should not be an error
  • #987344 Primary key name not quoted in showed_nulls test
  • #1075611 error when restoring tables with dashes in name
  • #1124106 Mydumper/myloader does not care for 0-s in AUTO_INCREMENT fields
  • #1125997 Timestamp data not portable between servers on different timezones

mydumper 0.6.1

  • #1273441 less-locking breaks consistent snapshot
  • #1267501 mydumper erroneously always attempts a dummy read
  • #1272310 main_connection keep an useless transaction opened and permit infinite metadata table lock
  • #1269376 mydumper 0.6.0 fails to compile “cast from pointer to integer of different size”
  • #1265155 create_main_connection use detected_server before setting it
  • #1267483 Build with MariaDB 10.x
  • #1272443 The dumping threads will hold metadata locks progressively while are dumping data.

Note: the #1267501 fix is important for any Galera cluster deployment because of bug #1265656, which was fixed and released in Percona XtraDB Cluster 5.6.15-25.3.

The post New mydumper 0.6.1 release offers performance and usability features appeared first on MySQL Performance Blog.

Categories: MySQL

Percona Server 5.5.36-34.0 is now available

MySQL Performance Blog - Mon, 2014-03-10 14:53

Percona Server version 5.5.36-34.0

Percona is glad to announce the release of Percona Server 5.5.36-34.0 on March 10th, 2014 (Downloads are available here and from the Percona Software Repositories). Based on MySQL 5.5.36, including all the bug fixes in it, Percona Server 5.5.36-34.0 is now the current stable release in the 5.5 series. All of Percona‘s software is open-source and free, all the details of the release can be found in the 5.5.36-34.0 milestone at Launchpad.

New Features:

  • Debian and Ubuntu packaging has been reworked to meet the packaging standards.
  • Ability to change database for mysqlbinlog has been ported from Percona Server 5.1.
  • Percona Server has implemented Slow Query Log Rotation and Expiration feature to provide users with better control of slow query log disk space usage.
  • In order to comply with Linux distribution packaging standards, Percona’s version of libmysqlclient has been renamed to libperconaserverclient. The old name conflicted with the upstream libmysqlclient. Except for packaging, libmysqlclient and libperconaserverclient of the same version do not have any differences. Users wishing to continue using libmysqlclient will have to install the corresponding package of their distribution, such as mysql-libs for CentOS and libmysqlclient18 for Ubuntu/Debian. Users wishing to build software against libperconaserverclient should install the libperconaserverclient-dev package. An old version of Percona-built libmysqlclient will be available for download.

Bugs Fixed:

  • The XtraDB version number in univ.i was incorrect. Bug fixed #1277383.
  • Percona Toolkit UDFs were only shipped with RPM packages. Bug fixed #1159625.
  • Server could crash if it was signaled with SIGHUP early in the server startup. Bug fixed #1249193 (upstream #62311).
  • Server could crash if XtraDB innodb_dict_size_limit option was set due to incorrect attempts to remove indexes in use from the dictionary cache. Bugs fixed #1250018 and #758788.
  • Fix for bug #1227581, a buffer pool mutex split regression, was not complete, thus a combination of write workload and tablespace drop could crash the server if InnoDB compression was used. Bug fixed #1269352.
  • Binary RPM packages couldn’t be built from source tarballs on Fedora 19. Bug fixed #1229598.
  • Percona Server that was compiled from source package had different server version string from that of binary packages. Bug fixed #1244178.
  • InnoDB did not handle the cases of asynchronous and synchronous I/O requests completing partially or being interrupted. Bugs fixed #1262500 (upstream #54430), and #1263087 (Andrew Gaul).
  • Fixed the CMake warnings that were happening when the Makefile was generated. Bug fixed #1274827 (upstream #71089).
  • Percona Server source tree has been reorganized to match the MySQL source tree layout more closely. Bug fixed #1014477.
  • On Ubuntu Precise multiple architecture versions of libmysqlclient18 couldn’t be installed side by side. Bug fixed #1052636.

Other bugs fixed: #1005787.

Release notes for Percona Server 5.5.36-34.0 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Percona Server 5.5.36-34.0 is now available appeared first on MySQL Performance Blog.

Categories: MySQL

Q&A: Common (but deadly) MySQL Development Mistakes

MySQL Performance Blog - Fri, 2014-03-07 11:00

On Wednesday I gave a presentation on “How to Avoid Common (but Deadly) MySQL Development Mistakes” for Percona MySQL Webinars. If you missed it, you can still register to view the recording and my slides.

Thanks to everyone who attended, and especially to folks who asked the great questions. I answered as many as we had time for during the session, but here are all the questions with my complete answers:

Q: Does a JOIN operation between two tables always produce an “access table” on the rows of the first table of the join, or is it possible to define an index (COVERING INDEX) to avoid this access to the first table?

Yes, if your query references only columns in the index, then MySQL can create the query result without having to access table rows. That’s the way a covering index gives benefit, by eliminating many page accesses. MySQL can get this optimization on any table in your query, even the first table in a JOIN.
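
As a minimal sketch (the table and index here are hypothetical, not from the webinar), a covering index for a query that touches only indexed columns looks like this; EXPLAIN reports “Using index” when no table rows need to be read:

-- Hypothetical table: idx_cust covers every column the query references.
CREATE TABLE orders (
  id          INT UNSIGNED NOT NULL PRIMARY KEY,
  customer_id INT UNSIGNED NOT NULL,
  status      VARCHAR(20) NOT NULL,
  created_at  DATETIME NOT NULL,
  KEY idx_cust (customer_id, status, created_at)
) ENGINE=InnoDB;

EXPLAIN SELECT status, created_at
FROM orders
WHERE customer_id = 42;   -- Extra column shows "Using index"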

Q: What are the performance drawbacks of prepared statements?

Some people have supposed that a prepared statement causes overhead, because it involves two communications between the application and the database server. First to send the SQL query with placeholders, and second to send parameter values. But unless you have a very slow network between your app and the database server, this isn’t a significant amount of overhead, and the benefits of using prepared statements usually outweigh the cost.

In fact, in a past blog post, Peter Zaitsev showed in a test that prepared statements allow for a significantly higher rate of queries per second than non-prepared queries.

I have also done tests with PHP, and I found that the overhead to response time is due more to the additional PHP code than to anything on the database server side.

One more performance consequence of using prepared statements: compatibility with the query cache. If you issue SQL statements with the text protocol, i.e. using PREPARE and EXECUTE statements, then MySQL cannot use the query cache. But applications typically run dynamic SQL using the binary protocol, and that is compatible with the query cache.
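
For reference, this is what the text-protocol (SQL-level) prepared statement flow mentioned above looks like; the table and values are hypothetical:

PREPARE stmt FROM 'SELECT id, title FROM posts WHERE author_id = ?';
SET @author := 15;           -- user variable passed as the parameter
EXECUTE stmt USING @author;  -- results of EXECUTE are not served from the query cache
DEALLOCATE PREPARE stmt;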

Q: What about using fixed length text fields rather than VARCHARs?

CHAR would be an example of a fixed-length string data type. MyISAM tables have an optimization when all columns of a table use fixed-length data types. Then the row position in the data file is simply a multiple of the row size.

But InnoDB does not have this optimization. Rows are always located on pages, and pages are referenced by pointers rather than by computing an offset in the file. So there is no benefit to using fixed-length string data types.
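
A small sketch of the MyISAM optimization described above, using a hypothetical table: with only fixed-length columns the table reports Row_format: Fixed, and switching the CHAR to VARCHAR makes it Dynamic:

-- Hypothetical table used only to illustrate MyISAM row formats.
CREATE TABLE codes_fixed (
  id   INT NOT NULL,
  code CHAR(8) NOT NULL
) ENGINE=MyISAM;

SHOW TABLE STATUS LIKE 'codes_fixed'\G   -- Row_format: Fixed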

Please don’t take this as a recommendation to use MyISAM. You should use InnoDB in virtually all cases. InnoDB has better performance than MyISAM in most workloads, but even more importantly it supports ACID behavior and has crash recovery. MyISAM is gradually being phased out.

Q: Advice on fixing some of the datatype or character set problems? 

You can use a query modifier like PROCEDURE ANALYSE to help you find out what is the optimal datatype for each of your columns.
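
A quick sketch of that suggestion, run against a hypothetical table; PROCEDURE ANALYSE() scans the result set and reports observed min/max values plus a suggested optimal type for each column:

SELECT * FROM posts PROCEDURE ANALYSE();
-- Optional arguments limit the suggestions, e.g. PROCEDURE ANALYSE(16, 256).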

The tricky part, though, is changing a data type on a table populated with a lot of data. Using ALTER TABLE may lock the table for the duration of restructuring it to change a data type, and that can interrupt your work. It takes time to do this, proportional to the size of data in the table and the number of indexes that need to be rebuilt.

MySQL 5.6 has new features for InnoDB to perform ALTER TABLE as online DDL. This can help in some cases to avoid locking your table. In the case of changing a data type, you can continue to read the table, but you can’t make changes while the ALTER TABLE is running.

Another solution is to use Percona Toolkit’s pt-online-schema-change. This free tool performs table alterations during which you can continue to read and write data.
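
A hedged example of the pt-online-schema-change approach; the host, database, table and column names are made up for illustration:

# Sketch only: change a column's data type online; verify options against
# the pt-online-schema-change documentation for your Toolkit version.
pt-online-schema-change \
  --alter "MODIFY COLUMN created_at DATETIME NOT NULL" \
  D=blog,t=posts,h=localhost,u=dba --ask-pass \
  --execute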

Q: Can you talk about index cardinality vs. selectivity and cases where an index does not help because the number of pages read by the database does not decrease by using an index?

Selectivity is the ratio of the rows matched by a given query to the total number of rows in the table. Say, for instance, I am looking for blog posts written by a given user. There are 1500 blog posts, and 15 written by me. That’s a selectivity of 15/1500, or 1%.

MySQL’s optimizer pays attention to selectivity, and will prefer not to use an index if the selectivity goes over 20%.

To understand this, think of an analogy to a book with an index at the back. Why doesn’t the index contain common words like “the”? Because a word like “the” would occur on virtually every page of the book, so its entry in the index would simply list every page. The list of pages itself could be pages long! If we were required to use the index to find all those pages, it would seem like a waste of time to flip back and forth, using the index to tell us where the next matching page is. It would be easier and more efficient to simply read the book cover-to-cover.

It is similar in a database index. Loading pages of index structures has some cost, just like loading pages of data rows. So there’s a selectivity breakpoint at which it becomes more efficient to simply do a table-scan. Hopefully MySQL makes the correct estimate of selectivity and where that breakpoint is, but occasionally it can get that wrong.

In those cases, you can override the optimizer by using Index Hints. But those cases are generally rare; it’s better to avoid using optimizer hints unless you are forced to.

Cardinality is an estimate of the number of unique values in an index. You can see it in the output of SHOW INDEX. A unique index by definition has cardinality equal to the number of rows. Cardinality doesn’t come up as often for the optimizer, except that a unique index on a PRIMARY or UNIQUE KEY allows MySQL to access the table with a different join type, that being const or eq_ref.
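
To make the last two points concrete, here is a small sketch (the table and index names are hypothetical): SHOW INDEX exposes the Cardinality column, and FORCE INDEX overrides the optimizer in the rare case its selectivity estimate is wrong:

SHOW INDEX FROM posts;   -- the Cardinality column holds the estimate per index

SELECT id, title
FROM posts FORCE INDEX (idx_author)   -- use only when the optimizer clearly misjudges
WHERE author_id = 15;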

Q: The blog post said the partitioning and sharding topic would be touched on. Unfortunately, it wasn’t, but I’d like to try and bring it up.

Apologies for that, I covered as many topics as I could during a one-hour webinar. I’ll plan to do another webinar in the future, and I’ll make sure to cover partitioning and sharding in more detail then. But on to your question…

Q: What would be the dataset size when you should think about sharding?

The decision to use sharding has little to do with dataset size. It has to do with the rate of traffic — especially write traffic. That is, use sharding when you’re trying to INSERT or UPDATE at such a rapid rate of queries per second that your database server can’t physically handle the rate of changes. Sharding allows us to scale out to multiple database servers, each of which will handle a fraction of the writes.

Q: And what would be the best choice middleware for the task? I.e. MySQL proxy is still alpha, so it’s not something you would put into production yet.

Right, MySQL Proxy has been available for years (version 0.7.0 is dated 2009-01-13 in launchpad.net), but curiously, its documentation still explicitly says it’s in alpha.

Oracle is currently working hard on a new solution to manage sharded architectures, it’s called MySQL Fabric. It’s still under development, but it’s available at MySQL Labs and you can download it and try it out. No doubt the developers would greatly value community feedback.

Another tool developed by Percona veteran Justin Swanhart is Shard-Query. This works as a proxy, and transparently maps your SQL statements into multiple queries, executed in parallel against individual shards in your cluster. Then it combines the results into a single result set. You can watch the recording of Justin’s past webinar about Shard-Query.

Q: Would you recommend moving to 5.6 (in production)? The benchmark numbers look amazing.

Yes, MySQL 5.6 demonstrates some great benchmarks, and Percona Server 5.6 adds even greater scalability. I would certainly recommend upgrading if you haven’t already.

When 5.6 first came out, there were a few high-profile bugs. But that’s true of every initial release of a new major version of any software. It pays to test carefully before any major upgrade, and to have a fallback plan. But it’s been a year since the first GA release of 5.6, and we have worked with many customers who have deployed it successfully in production and are very happy with the performance improvements.

Q: Would also love to hear what you think about sharding plus partitioning, and MySQL cluster.

I’ll go into more detail on sharding plus partitioning in a future webinar.

But let me say this about MySQL Cluster: the NDB storage engine is brilliant, it provides high availability at the same time as high performance, and internally I am told it’s one of the best pieces of code with the best test suite of any component of the MySQL family.

However: you should be aware that using any kind of sharded architecture requires that you design your database schema and your queries to match the way the data is sharded. Sharding works best when your queries need to access only one shard. For example, reading or writing just one row at a time.

If your queries involve range conditions, for instance, this is likely to require all shards to respond to the query, and that would gain you nothing in regards to performance or scalability. In fact, we’ve seen cases where a company migrated an existing application from a traditional MySQL instance to MySQL Cluster without designing their schema for sharding, and their performance degraded instead of improving.

Another example is if you do a JOIN between related rows, but the sharding by primary key has spread the related rows across multiple shards, then the JOIN could actually be more expensive than if you were running in a non-sharded environment. The fix in that case is to make sure your keys are defined such that the foreign key in related rows has the same modulus as the primary key of their parent row.
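
A hedged schema sketch of that idea (all names are hypothetical): both tables carry the same shard key, so a parent row and its children map to the same shard when the key is taken modulo the number of shards:

CREATE TABLE customers (
  customer_id BIGINT UNSIGNED NOT NULL PRIMARY KEY,
  name        VARCHAR(100) NOT NULL
) ENGINE=InnoDB;

CREATE TABLE orders (
  order_id    BIGINT UNSIGNED NOT NULL,
  customer_id BIGINT UNSIGNED NOT NULL,   -- same shard key as the parent row
  total       DECIMAL(10,2) NOT NULL,
  PRIMARY KEY (customer_id, order_id)
) ENGINE=InnoDB;

-- shard number = customer_id % number_of_shards for both tables,
-- so a JOIN between a customer and its orders stays on one shard.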

Q: You said that VARCHAR columns store efficiently on disk in data and indexes and in the buffer pool. But a string in the SQL layer pads out by using as much space as the maximum length, increased 3x for utf8 or 4x for utf8mb4. I don’t understand what the “SQL layer” means. Can you explain that?

The MySQL server has an architecture that divides its work between two main areas of code: the first part is storage-independent tasks like accepting connections, session management, authentication, SQL parsing, query optimization, logging, replication, the query cache, etc. We call this part of MySQL Server the “SQL layer” (or sometimes the “MySQL layer”).

The other part is the storage engine, which is responsible for reading and writing rows, some kinds of caching (like the buffer pool in the case of InnoDB), indexes, I/O, and so on.

The SQL layer knows how to parse queries and turns them into a series of individual row requests. Then the storage engine for a given table fulfills those row requests, reading them off disk or from the buffer pool as needed. As the storage engine returns rows, they are copied from one part of the code (the storage engine) to the other part (the SQL layer), which then prepares to return them as a query result set.

My point in the webinar is that the storage engine knows how to store VARCHAR data in a compact form, by storing variable-length strings. But the SQL layer treats rows as fixed-length records (except for BLOB/TEXT) and therefore allocates memory for VARCHAR strings according to the maximum length the column could be.
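
As an illustration of that memory behavior (the table is hypothetical and the numbers are the worst case, not a measurement):

-- In the SQL layer a VARCHAR(255) in utf8 is budgeted at 255 * 3 = 765 bytes
-- per row in sort buffers and in-memory temporary tables, no matter how short
-- the stored values actually are.
CREATE TABLE t_names (
  name VARCHAR(255) CHARACTER SET utf8
) ENGINE=InnoDB;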

*  *  *

Thanks again for attending my webinar! Remember that you can still view the slides and the recording for “Common (but deadly) MySQL Development Mistakes.”

If you want even more tips and content about MySQL from the best experts in the community, you should go to the Percona Live MySQL Conference & Expo, April 1-4 in Santa Clara, California. Register here: http://www.percona.com/live/mysql-conference-2014/

The post Q&A: Common (but deadly) MySQL Development Mistakes appeared first on MySQL Performance Blog.

Categories: MySQL

Percona XtraBackup 2.1.8 is now available

MySQL Performance Blog - Thu, 2014-03-06 15:58

Percona is glad to announce the release of Percona XtraBackup 2.1.8 on March 6th, 2014. Downloads are available from our download site here and Percona Software Repositories.

This release is the current GA (Generally Available) stable release in the 2.1 series. Percona XtraBackup is an open source, free MySQL hot backup software that performs non-blocking backups for InnoDB and XtraDB databases.

Bugs Fixed

  • Due to incorrect usage of posix_fadvise() hints, Percona XtraBackup discarded read-ahead buffers which resulted in higher I/O rate on the backup stage. Bug fixed #1093385.
  • Spurious trailing data blocks that would normally be ignored by InnoDB could lead to an assertion failure on the backup stage. Bug fixed #1177201.
  • A spurious warning message could cause issues with third-party wrapper scripts. Bug fixed #1271956.
  • xbcrypt could fail with the xbcrypt:xb_crypt_read_chunk: unable to read chunk iv size at offset error under some circumstances. Bug fixed #1273196.
  • xbstream could sometimes hang when extracting a broken or incomplete input stream. Bug fixed #1273207.
  • Preparing backups of MySQL or Percona Server 5.6 could raise an assertion failure in Percona XtraBackup. Bug fixed #1273468.
  • The downtime is decreased when the --safe-slave-backup option is used to back up a replication slave. The SQL thread is now started before a temporary copy of the redo log is streamed into the final location (see the usage sketch after this list). Bug fixed #1284778.
  • Disabled the “binary version check” functionality in the VersionCheck module due to security concerns. Bug fixed #1285166.
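
A hedged usage sketch for the --safe-slave-backup behavior above; the credentials and target directory are made up:

# Sketch only: back up a replication slave; verify options against the
# Percona XtraBackup 2.1 documentation.
innobackupex --safe-slave-backup --slave-info \
  --user=backup --password=secret /backups/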

Other bugs fixed: #1284078.

Release notes with all the bugfixes for Percona XtraBackup 2.1.8 are available in our online documentation. All of Percona‘s software is open source and free, all the details of the release can be found in the 2.1.8 milestone at Launchpad. Bugs can be reported on the launchpad bug tracker.

The post Percona XtraBackup 2.1.8 is now available appeared first on MySQL Performance Blog.

Categories: MySQL

Many-table joins in MySQL 5.6

MySQL Performance Blog - Thu, 2014-03-06 14:37

I recently worked on an uncommon slow query: less than 100 rows were read and returned, the whole dataset fit in memory, yet the query took several seconds to run. Long story short: the query was a join involving 21 tables, running on MySQL 5.1. By default, MySQL 5.1 is not good at handling joins with such a large number of tables. The good news is that MySQL 5.6 brings welcome improvements.

Isolating the problem

As always with a slow query, finding the execution plan with EXPLAIN is the first step to understanding where time is spent. Here the plan was very good, with almost all joins using the primary key or a unique key, but perhaps the most interesting part was that EXPLAIN was very slow as well. This indicates that the optimizer takes a lot of time finding the optimal execution plan. To double check, we can run SHOW PROFILE:

mysql> set @@profiling = 1;
mysql> SELECT ....;
mysql> SHOW PROFILE;
+--------------------+----------+
| Status             | Duration |
+--------------------+----------+
| starting           | 0.000783 |
| Opening tables     | 0.000192 |
| System lock        | 0.000005 |
| Table lock         | 0.000010 |
| init               | 0.000369 |
| optimizing         | 0.000091 |
| statistics         | 3.459529 |
| preparing          | 0.000213 |
| executing          | 0.000005 |
| Sending data       | 0.004597 |
| end                | 0.000008 |
| query end          | 0.000005 |
| freeing items      | 0.000066 |
| logging slow query | 0.000004 |
| cleaning up        | 0.000006 |
+--------------------+----------+

‘Statistics’ means ‘finding a good execution plan’. Okay, we have found why the query is slow.

If you have read this post, you already know how to fix the problem: set optimizer_search_depth = 0 and response time instantly drops to 0.02s. Also mentioned in this post, we should expect MySQL 5.6 to be much better at handling such queries. Well, let’s give it a try!
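
For reference, the fix mentioned above is a single session (or global) variable change; setting it to 0 tells the optimizer to pick a reasonable search depth on its own:

SET SESSION optimizer_search_depth = 0;
-- re-run EXPLAIN or the query afterwards and compare the 'statistics' time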

MySQL 5.6

With the default value of optimizer_search_depth, response time is 0.02s in 5.6 and we can see in SHOW PROFILE that the time spent finding a good execution plan is very small:

+----------------------+----------+
| Status               | Duration |
+----------------------+----------+
| starting             | 0.002022 |
| checking permissions | 0.000018 |
[...]
| checking permissions | 0.000015 |
| Opening tables       | 0.000130 |
| init                 | 0.001093 |
| System lock          | 0.000048 |
| optimizing           | 0.000200 |
| statistics           | 0.009404 |
| preparing            | 0.000432 |
| executing            | 0.000029 |
| Sending data         | 0.005409 |
| end                  | 0.000027 |
| query end            | 0.000009 |
| closing tables       | 0.000034 |
| freeing items        | 0.000082 |
| cleaning up          | 0.000030 |
+----------------------+----------+

However ‘Statistics’ is still the most expensive step. What if we set optimizer_search_depth = 0? It is even better:

[...]
| statistics           | 0.001896 |
[...]

And the response time drops to 0.01s.

Conclusions
  • EXPLAIN is a great tool to understand how the server will run a query, but it does not tell you everything. Additional tools (here, SHOW PROFILES) are often useful as well.
  • By default MySQL 5.6 is much better than the previous versions at handling joins with many tables: great job optimizer team at Oracle!
  • However even with 5.6, you may get better response time with optimizer_search_depth = 0

The post Many-table joins in MySQL 5.6 appeared first on MySQL Performance Blog.

Categories: MySQL

Engineer duo from Google, LinkedIn join again for InnoDB talks

MySQL Performance Blog - Wed, 2014-03-05 15:12

Google senior systems engineer Jeremy Cole is once again teaming with LinkedIn senior software engineer Davi Arnaut for two InnoDB-focused sessions at the upcoming Percona Live MySQL Conference and Expo 2014 this April 1-4 in Santa Clara, California.

The duo will present “InnoDB: A journey to the core II” on April 2 and “InnoDB: A  hands-on exploration of on-disk storage with innodb-ruby” on April 4. Based on Jeremy’s InnoDB blog series, both sessions will be a continuation of last year’s Percona Live 2013 lecture, “InnoDB: A journey to the core” (the slides are available by following that link).

Tom: Jeremy and Davi, your sessions seem to be evolving based on the success of last year’s presentation. What prompted you both to expand that talk into two more this year?

Jeremy: Last year in “InnoDB: A journey to the core” we wanted to give a very thorough talk but we were pretty time limited. In private sessions of a similar nature, we usually allotted 2 hours for the talk and often went another hour with questions. We had to cram pretty hard to get everything we wanted into the session, and left out a lot of useful demos and digressions.

Davi: Apart from giving us greater time flexibility, it allows for an expansion and more comprehensive overview of the topics. Last year’s talk was focused on the on-disk structure of indexes and included only passing mentions of innodb_ruby. This time we will dig further into innodb_ruby and also cover the on-disk structures associated with logging and recovery in InnoDB.

Tom:  Why did you decide to focus on redo and undo logs in your April 2 session? And what specifically about them will you explore?

Jeremy: Largely because we completely left them out of last year’s talk for lack of time. Redo and undo logging is often misunderstood and sometimes even mixed up with binary logging. They aren’t so complicated though, and a better understanding of them would be very beneficial to many DBAs.

Davi: The redo and undo logs are essential components of a functioning database and central to ensuring data integrity, yet their inner workings are often overlooked. We aim to provide an in-depth overview of the internal structure and organization of the logs, which should help DBAs acquire a greater understanding of what these logs are for and how they are used for things ranging from crash recovery to hot backup.

Tom: How did the April 4 session come about? And what will it focus on specifically?

Jeremy: I really wanted to give a demo of some of the amazing stuff that innodb_ruby can do. We’ve put a lot of work into it now, and it has an almost complete view of everything stored inside InnoDB. I think people can put it to a lot of good use, and I’d like to show them how. I also hope to have a little fun with the audience — wait and see!

Davi: The innodb_ruby tool has evolved greatly in the past months, deserving an increased prominence. The tool now offers a wide range of new features such as data dictionary support and cursors for searching or traversing indexes, which we hope will excite and challenge the audience.

Tom: Who should attend your two sessions and what do you want them to walk away with at the end of each?

Jeremy: Really, although the topics are pretty advanced, anyone who is using InnoDB in a serious capacity could probably learn something. I’d love them to leave with a better working knowledge of the mysteries of InnoDB. The next time they see a weird message in their log they’ll be able to say “I know what that actually means!” rather than “WTF?”.

Davi: Anyone looking to explore and improve their understanding and background knowledge of InnoDB. Attendees will likely be more informed in the future when talking about or dealing with InnoDB’s logging and recovery. Some will hopefully even become interested in contributing to innodb_ruby.

Tom: Do you have any homework for folks planning to attend them?

Jeremy: Reading my entire series of blog posts at blog.jcole.us/innodb would be a great start. Since innodb_ruby is open source, they could try installing it and playing around with it a bit (perhaps while following along with the blog posts).

Davi: Glancing over last year’s slides certainly helps in providing context and insight into the topics to be discussed.

Tom: How did you come up with the initial idea for this series on InnoDB for Percona Live?

Jeremy: Basically out of necessity. We did so much original research on InnoDB to build our understanding that we had to share it.

Davi: Exposing these topics to a wider audience can yield huge benefits to the community, and as the saying goes “given enough eyeballs, all bugs are shallow”.

Tom: How did you both come together on these sessions?

Jeremy: Davi and I worked together at Twitter and initially began our InnoDB exploration to solve some very specific problems. While we have both moved on from Twitter, we still very much enjoy working together, and we do on an almost daily basis. Aside from continuing to collaborate on our InnoDB explorations, we’re good friends.

Davi: We ran into a storage efficiency issue that required a closer inspection and knowledge of InnoDB’s storage format. At the time there was no way and no tool for inspecting and visualizing size and fragmentation information for table data and indexes, thus the need for research and development of innodb_ruby. Later, it allowed us to explore and identify some bugs, including bugs we were looking for and others we weren’t expecting.

Tom: Why innodb_ruby? Why Ruby?

Jeremy: Davi initially thought it (and by virtue, I) was crazy trying to implement InnoDB data structures in Ruby, but happily it has turned out to have been (in our opinion) a very good idea. In the end it doesn’t matter that it was Ruby — it could’ve been Python, Perl, PHP, or even Javascript. The key was that it be done in a language substantially different than InnoDB itself, and ideally much easier to develop in than C or C++. This forced us to really understand the InnoDB code to re-implement the data structures, which is always a very valuable learning experience.

Initially Davi tried the obvious thing: hacking something together using the actual InnoDB code. At the same time, I wrote an InnoDB page reader in C++ from scratch. They both proved difficult to develop and slow to make progress on, which also made them a lot less fun to work on. So I (very) quickly hacked together a copy of my page reader in Ruby and it has since morphed into the full innodb_ruby project.

Additionally, against some odds, Ruby has proven to be a very efficient (for the programmer — but horribly inefficient at run time) and flexible language for working with InnoDB data.

Davi: Hum, I don’t mind Ruby… anything but JavaScript.

Tom: Are you guys planning a Part III for Percona Live 2015?

Jeremy: I hope so! It mostly depends on the audience’s appetite for it and our ongoing research. We’re also still working on refining the format and delivery of the material. And of course, MySQL is changing all the time.

Davi: Certainly, and perhaps even expanding to other (often unexplored) topics such as InnoDB locking internals. I wish there was a greater focus on internals related topics at the conference. We need more people with MySQL internals knowledge.

Tom: Jeremy, you’ve had an illustrious career up to this point – all related to MySQL. Prior to Google, you were an early employee at MySQL, organized and led the MySQL Geeks at Yahoo, and at Twitter you built and led the DBA team. How did you get interested in MySQL?

Jeremy: Illustrious? I don’t know. It’s been a lot of hard work, and mostly accidental. I owe most of my successes to my teammates, and I’ve made a lot of mistakes along the way. I originally got into MySQL by using it at a couple of web development startups. Of course I hit bugs, I saw typos in the manual, I needed features that weren’t there.

I joined the mailing list, and at first started sending fixes (not even diffs!) for easy stuff, especially in the manual. Those fixes were often accepted quickly and I came up to speed on the proper Open Source etiquette of emailing diffs, and since this drastically reduced the work required by the recipient, I got a lot bolder with changes and sent bigger and bigger diffs. I eventually realized that working there would be fun and make things easier, so I asked Monty, and then the real adventure began.

Tom: Davi, same question to you.  Your career has also been immersed in MySQL. Prior to joining the MySQL development team at LinkedIn, you were a lead software engineer in the MySQL development teams at Oracle and also at Twitter, where I assume you worked with Jeremy. What is your MySQL story?

Davi: Right after college I was looking for a (open source related) job and MySQL had a few job openings at the time. After some interviews and fixing a couple of bugs, I was hired into the runtime team (responsible for locking, SQL parsing, prepared statements, stored procedures, etc) and started working on bug fixing for the then upcoming MySQL 5.1 release. This continued for a few more years as focus shifted to the 5.5 and 5.6 releases. I literally spent years fixing bugs in MySQL (such as infamous MySQL Bug #989) and this allowed me to gain intimate knowledge (and perhaps hatred) of various MySQL components.

Tom: Did either of you have a specific mentor who made a difference in your career path?

Jeremy: I don’t think I’ve had a single overarching mentor, but rather a series of them who all contributed a small part. I wrote up my story and named a few names on my blog when I was awarded the MySQL Community Contributor of the Year 2013 but overall I try to learn what I can from everyone I encounter. In recent years I have learned an awful lot from (and with!) Davi.

Davi: My current career path is in large part thanks to the great people I worked with at MySQL. They were and have been awesome colleagues and friends that taught me the nuances of working on MySQL. Working with Jeremy in particular has been a great experience.

Tom: What other Percona Live sessions are you interested in attending?

Jeremy: Aside from actual sessions, one of the things I look forward to most each year is the social aspects: This conference is the only place where nearly every other member of the MySQL community comes together all at once. I am excited about Oracle’s participation and their sessions about MySQL 5.7, and the many talks from those who have deployed MySQL 5.6 in production.

Davi: Sessions from the Twitter and Facebook teams are a must as usual and I’m eager to hear more about the new developments happening in Oracle MySQL and MariaDB. The session about how GitHub manages MySQL also sounds interesting.

Tom: Jeremy and Davi, thanks very much for your time and I look forward to seeing you in April!

Jeremy: Thanks for the opportunity! I am definitely looking forward to the conference, too!

Davi: It’s my pleasure.

The post Engineer duo from Google, LinkedIn join again for InnoDB talks appeared first on MySQL Performance Blog.

Categories: MySQL

Percona Toolkit 2.1.11 is now available

MySQL Performance Blog - Tue, 2014-03-04 16:11

Percona is glad to announce the release of Percona Toolkit 2.1.11 on March 4th, 2014 (Downloads are available here).

Bugs Fixed:
Fixed bug 1279502: --version-check behaves like spyware

The configuration for what information Percona Toolkit should retrieve is not hard-coded in the scripts. Instead, it is downloaded from v.percona.com. One of the possible parameters was the name of a binary file to execute with “--version”. This was fixed by removing the ability to execute arbitrary binary files with the version parameter. You can find more information about the benefits of “version check” in the launch post for Percona Toolkit 2.2.
Fixed bug 1199589: pt-archiver deletes data despite --dry-run

In some cases pt-archiver would delete the data despite the --dry-run option.
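
For context, a typical dry run looks like this sketch; the connection details and WHERE clause are made up:

# Sketch only: print what would be archived/purged without touching data.
pt-archiver --source h=localhost,D=blog,t=old_posts \
  --where "created_at < NOW() - INTERVAL 1 YEAR" \
  --purge --dry-run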

All of Percona‘s software is open source and free, all the details of the release can be found in the Percona Toolkit 2.1.11 milestone at Launchpad. Bugs can be reported on the Percona Toolkit launchpad bug tracker.

Percona Toolkit for MySQL is a collection of advanced command-line tools used by Percona MySQL Support staff to perform a variety of MySQL server and system tasks that are too difficult or complex to perform manually, including:

- Verify master and replica data consistency
- Efficiently archive rows
- Find duplicate indexes
- Summarize MySQL servers
- Analyze queries from logs and tcpdump
- Collect vital system information when problems occur

The post Percona Toolkit 2.1.11 is now available appeared first on MySQL Performance Blog.

Categories: MySQL

Oracle’s Mats Kindahl to weave MySQL Fabric into Percona Live session

MySQL Performance Blog - Mon, 2014-03-03 15:11

Mats Kindahl of Oracle is lead developer of MySQL Fabric

MySQL Fabric is an integrated framework for managing farms of MySQL servers with support for both high-availability and sharding. Its development has been spearheaded by Mats Kindahl, senior principal software developer in MySQL at Oracle.

Mats is leading the MySQL Scaling and High-Availability effort covering the newly released MySQL Fabric and the MySQL Applier for Hadoop. He is also the architect and implementer of several features (mostly replication features), including the row-based replication available in 5.1 and the binary log group commit available in MySQL 5.6. Before joining MySQL, he earned a doctoral degree in the area of automated verification of distributed systems and worked on the implementation of C and C++ compilers.

He’ll be presenting at next month’s Percona Live MySQL Conference and Expo 2014 in Santa Clara, Calif. and said he was eager to speak with people following his session on the topic — especially those with real-life experience managing large farms. I had a conversation with Mats via Skype recently and here’s what he had to say.

Tom: Hi Mats, thanks for chatting with me today. Your session is titled “Sharding and Scale-out using MySQL Fabric.” What is MySQL Fabric and why did your team decide to create it?
Mats: MySQL Fabric is an extensible framework to manage large farms of MySQL servers. Currently we’ve added support for high-availability and sharding. That’s where we are right now.

MySQL has always had the “15 Minute Rule,” meaning that you should be able to install the software and have it up and running within that time frame. This aspect of ease of use is actually in my opinion part of the success of MySQL. It’s focused on being a flexible and versatile tool.

Given that we have this rule, it was easy to install and easy for people to get up and running on it. But now the trend is to use many servers in clusters — or what we like to call “farms” instead of “clusters” to avoid confusion with products such as MySQL Cluster, which is a different solution.

Using these large farms changed these rules to some extent and it’s not that easy to get up and running anymore if you have a lot of servers. It’s still not a problem with a single server or a few servers, but it becomes complicated once you start to add a large number of servers and try to change configuration and manage it dynamically.

What’s needed here is some kind of general tool for managing big farms of servers.  There are several tools available now in various places but each has a very special purpose focusing on a single aspect or a few aspects of managing a big system.

With MySQL Fabric, the focus is on building a framework focused on ease of use for managing and using large farms — treating them as one big distributed database: A federated database, in a manner of speaking.

The immediate goal with MySQL Fabric is to build a framework and implement some of the more critical features people need as extensions. We implemented one extension for handling high availability using the traditional master-slave setup — where you keep a single master and a number of secondaries as slaves. Once the master dies for some reason, the users have historically had to promote one of the slaves to be the new master — with MySQL Fabric, that is automatic.

In addition to failing over the slaves, you had to fail over the application or applications too. And this is something that has not been fully automatic. What we have done is to make connectors an integral part of the entire framework, so when you are working with a MySQL Fabric farm the application doesn’t have to configure or manage failovers manually. The connectors will automatically realize something is wrong, connect to MySQL Fabric to get information on what happened, and then perform the failover automatically, updating their information about the state of the farm in a way that is almost transparent to the application.

Tom: So then MySQL Fabric is designed to make MySQL much easier to work with – especially for large, data-intense installations.
Mats: Yes. Basically we saw that MySQL was difficult to use in larger setups. And there were no easy to use, flexible tools available that actually could handle what MySQL Fabric does. Most other tools are either special purpose or very limited in being able to manage specific aspects of managing large farms.

There are tools for sharding, there are tools for high availability – there are tools for other things. We want to have something that can actually be used as a framework and can be extended, so that anybody who wants to add special purposes for their setup will be able to add this to the Fabric to manage any specific needs they have.

We couldn’t find anything before like that so we decided to focus on building something that could.

Tom: Is MySQL Fabric free?
Mats: It is available to the community now. Exactly like MySQL Server, MySQL Fabric is open source. It is licensed under version 2 of the GNU General Public License (the “GPL”). And you can currently download it from dev.mysql.com where you’ll also find documentation.

Tom: When was MySQL Fabric’s first release?
Mats: We first released it publicly last autumn at MySQL Connect and released another version in December just before Christmas with some new additions and some new features.

Tom: How many users would you say there are now?
Mats: As far as we know, no one has deployed it yet, but we know that people are experimenting with it. We haven’t released it as a product yet — it’s in a pre-GA state. I’d call it a beta right now, and we are working on the public bug reports.

Tom: Your session at Percona Live next month is going to focus on MySQL Fabric – what can attendees expect and what will they walk away with afterward?
Mats: I’ll basically be talking about the architecture of Fabric, and that of course is centered around how you normally build a big system. You’ll also see how to use MySQL Fabric as a tool to setup a new farm to use for both high availability and for sharding.

Tom: Who should be taking a closer look now at MySQL Fabric? And who perhaps can wait a bit more?
Mats: We have focused on getting a system that can manage very large farms – so we are focusing on trying to eliminate so-called middleman approaches using proxies to increase performance. You probably won’t need anything like this if you have a simple setup with one server, or static site with one server and a few other servers that replicate for backups or reporting. But once you start adding servers or move around the servers more dynamically, then this is a tool that will be very useful. It is also valuable for users who have a small set of servers that they want to add high-availability to, since it simplifies administration.

Tom: What is the learning curve?
Mats: The focus is to be easy to use. People should be able to use it even for small setups. But it is architected in such a way that you should be able to use it even for very large setups, too.

Tom: Aside from your session, what are you interested in attending at next month’s conference?
Mats: I’m quite interested in anything related to high-availability and scalability — especially real-life experience from people managing large farms. There is a lot of value in seeing and understanding what kind of problems people run into in practice. Because it’s never what you see in a theoretical presentation on high-availability — how you manage it. It’s always the details that make a difference.

Two sessions in particular: One is Chip’s session (Chip Turner of Facebook) on asynchronous MySQL, which looks very interesting. And also, given the new improvements of replication in MySQL 5.6, Yoshinori’s (Yoshinori Matsunobu) session on Facebook’s usage of 5.6 is probably going to be very interesting.

There are also several sessions on sharding. Tumblr has one on their use of Jetpants that also looks interesting. I’ve attended that one before but it’s always interesting to talk with them and ask questions.

Additionally I saw that Justin (Swanhart) has a presentation on using Shard-Query, which looks very interesting.

This conference always provides a very diverse audience of users and developers. You always see something new or learn of something that’s coming down the road. And you are also able to talk with people about their real experiences — both good and bad — with using MySQL. So it’s useful for me as a developer to learn about the good parts, because it’s encouraging, but also the bad parts, so we can fix them.

Tom: Mats, thank you very much for speaking with me today. I look forward to meeting you next month in Santa Clara!
Mats: Thank you Tom! It was very nice speaking to you and I look forward to meeting you, and of course all the MySQL developers and users, at the conference next month.

 * * *

 The Percona Live MySQL Conference and Expo 2014 runs April 1-4 in Santa Clara, Calif. Use the code “SeeMeSpeak” on the registration page and save 10 percent.

The post Oracle’s Mats Kindahl to weave MySQL Fabric into Percona Live session appeared first on MySQL Performance Blog.

Categories: MySQL

Percona Server 5.6.16-64.0 with TokuDB engine now available

MySQL Performance Blog - Mon, 2014-03-03 14:30

Percona Server version 5.6.16-64.0 with TokuDB engine

Percona is glad to announce the first ALPHA release of Percona Server 5.6.16-64.0 with TokuDB engine on March 3rd, 2014. Downloads are available here and from the Percona Software Repositories.

Based on Percona Server 5.6.16-64.0 including all the features and bug fixes in it, and on TokuDB 7.1.5-rc.3, Percona Server 5.6.16-64.0-tokudb is the first ALPHA release in the Percona Server 5.6 with TokuDB engine series. All of Percona‘s software is open-source and free, all the details of the release can be found in the 5.6.16-64.0 milestone at Launchpad.

Percona Server now supports the TokuDB storage engine. More information on how to install and use TokuDB can be found in the documentation. This feature is currently considered ALPHA quality.

Available TokuDB features:

Currently available ALPHA features:

Getting Started Guide can be downloaded here, and the complete documentation in PDF format is available here.

We did our best to eliminate bugs and problems during the testing of this release, but this is software, so bugs are expected. If you encounter them, please report them to our bug tracking system.

The post Percona Server 5.6.16-64.0 with TokuDB engine now available appeared first on MySQL Performance Blog.

Categories: MySQL

How to avoid common (but deadly) MySQL development mistakes

MySQL Performance Blog - Fri, 2014-02-28 20:11

MySQL software developers are under pressure to do more in less time and create applications that adapt to ever-changing requirements. And it’s true that some of a developer’s time is wasted when his or her method of optimizing involves trying every combination of code and index definition.

There is a better way. And next Wednesday at 10 a.m. Pacific time, I’ll show you how. In my webinar, “How to Avoid Common (but Deadly) MySQL Development Mistakes,” you’ll learn:

  • How to find out which indexes are the best fit for your MySQL application
  • How to protect yourself from the number one MySQL database security vulnerability on the web
  • How to decide when to optimize a MySQL database application with denormalization, indexes, caching, partitioning, sharding

You’ll step away from your computer afterward being more confident and ultimately more productive as you develop MySQL database-driven applications. Register now to reserve your spot (it’s free). The same link will give you access to the recording later.

The post How to avoid common (but deadly) MySQL development mistakes appeared first on MySQL Performance Blog.

Categories: MySQL

How rows_sent can be more than rows_examined?

MySQL Performance Blog - Fri, 2014-02-28 11:00

When looking at queries that are candidates for optimization, I often recommend that people look at the rows_sent and rows_examined values as available in the slow query log (as well as some other places). If rows_examined is far larger than rows_sent, say 100 times larger, then the query is a great candidate for optimization. Optimization could be as simple as adding a few indexes, or much more complicated, such as generating summary tables so you do not need to rely on large aggregations for your real-time queries.

Just to be clear, this is a great rule for your “real time” queries that need to be handled very quickly and in high volumes. Batch jobs and reporting queries often have to scan through millions of rows to get a few rows of result set, and that is fine.
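
As a sketch of where these counters come from (the threshold is illustrative, not a recommendation):

SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 0.5;   -- log statements slower than 0.5 seconds
-- Each entry in the slow query log then carries a header such as:
-- # Query_time: 2.1  Lock_time: 0.0  Rows_sent: 12  Rows_examined: 1184000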

So it is all clear with rows_sent being smaller than rows_examined, but can it be the other way around, with rows_examined smaller than rows_sent? Yes it can! Here are a couple of things you need to be aware of in this regard:

Query served without Execution
As strange as this might sound, it is very much possible in MySQL. Some queries can be completely resolved during the “optimization” stage, so there is no need to actually execute the query by accessing the tables. You will see “Select tables optimized away” in the query EXPLAIN in this case:

explain select count(comment_count) from wp_posts;
+----+-------------+---------------------------+------------------------------+
| id | select_type | table,type,possible_keys, | Extra                        |
|    |             | key,key_len,ref,rows      |                              |
+----+-------------+---------------------------+------------------------------+
|  1 | SIMPLE      | all NULLs                 | Select tables optimized away |
+----+-------------+---------------------------+------------------------------+
1 row in set (0.00 sec)

Typically you will see this when using COUNT(*) on MyISAM tables, as in the example above, as well as when using the MIN()/MAX() aggregate functions on an indexed column.

Reading data from the Index
Observing such behavior, I’ve seen people thinking it could be due to using a “covering index.” MySQL is reading data from the index in this case, not rows, right? Wrong! For the sake of counting this variable, MySQL does not distinguish whether it could read the data from the index alone or whether the row data had to be consulted. In both cases it counts as a row which has been examined.

Index Condition Pushdown
Index Condition Pushdown really breaks things here: when index entries are filtered at the storage engine level they are not counted towards rows_examined, at least in MySQL 5.6. This is unfortunate, as it may make queries using Index Condition Pushdown look less expensive than they actually are, in some cases possibly 100x or more less expensive when Index Condition Pushdown is used to filter out a large portion of the rows. I wish the storage engine could track how many rows it examines through Index Condition Pushdown and how many it filters out, so rows_examined could account for it properly.
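
A sketch of the classic Index Condition Pushdown shape, using a hypothetical table with a composite index on (zipcode, lastname); the trailing LIKE filter is evaluated inside the storage engine and EXPLAIN shows “Using index condition”:

EXPLAIN SELECT *
FROM people
WHERE zipcode = '95054'
  AND lastname LIKE '%son%';   -- filtered by the engine via ICP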

Query Cache
The Query Cache is another interesting case, as any query cache hit will have rows_examined and rows_sent both equal to zero. It might be confusing at first: one could expect rows_examined to be zero as no tables are being read, while rows_sent should have an actual number because a result set is sent to the client. What happens in reality is that the Query Cache caches the result set packets as they are sent to the client, and they are sent again on a query cache hit without any parsing, so the server does not know how many rows are being sent. I wish the number of rows sent were stored together with the query result and so made available in the log files, though I do not expect it to ever be fixed, as the Query Cache is not getting a lot of attention these days.

Summary
Well… you can get rows_examined less than rows_sent, but I only expect to see it for some rare kinds of queries. There are also different cases you need to be aware of which may cause rows_sent or rows_examined not to be representative of the actual processing a query requires or the size of the actual result set.

The post How rows_sent can be more than rows_examined? appeared first on MySQL Performance Blog.

Categories: MySQL

DBA 101: Sometimes forgotten functionality in the MySQL client

MySQL Performance Blog - Thu, 2014-02-27 08:00

The MySQL client has some functionality that some of us never use. Why would you use it, and what is the added value?

Every DBA and developer has had a moment when he or she needed to connect to a MySQL database using the command line tool. Therefore I’ve written down an explanation of some commands you can use in the CLI; most of them add value and make your experience with the CLI more enjoyable.

prompt

Who has never had the scary feeling of not being connected to the right database while having several terminals open? I haven’t, due to the fact that I use the prompt functionality.

mysql> \R Production >
PROMPT set to 'Production > '

Or you can go a bit further and visualise the user, host and active database in:

mysql> \R \u@\h [\d]>
PROMPT set to '\u@\h [\d]>'
root@testbox [test]>

edit

In some situations, editing the query in an editor instead of on the command line has several advantages: it gives you the ability to fix typos and to take a closer look at the queries before you submit them.
Use \e if you’d like to compose the query in your default editor instead of in the CLI.

mysql> \e

The editor appears, in which you can create your queries.

use sakila;
select * from city limit 10;
~
~
~

After you close the editor and enter a delimiter in the CLI, the query will be run against the database and its output shown in the prompt.

mysql> \e
    -> ;
+---------+--------------------+------------+---------------------+
| city_id | city               | country_id | last_update         |
+---------+--------------------+------------+---------------------+
|       1 | A Corua (La Corua) |         87 | 2006-02-15 04:45:25 |
|       2 | Abha               |         82 | 2006-02-15 04:45:25 |
|       3 | Abu Dhabi          |        101 | 2006-02-15 04:45:25 |
|       4 | Acua               |         60 | 2006-02-15 04:45:25 |
|       5 | Adana              |         97 | 2006-02-15 04:45:25 |
|       6 | Addis Abeba        |         31 | 2006-02-15 04:45:25 |
|       7 | Aden               |        107 | 2006-02-15 04:45:25 |
|       8 | Adoni              |         44 | 2006-02-15 04:45:25 |
|       9 | Ahmadnagar         |         44 | 2006-02-15 04:45:25 |
|      10 | Akishima           |         50 | 2006-02-15 04:45:25 |
+---------+--------------------+------------+---------------------+
10 rows in set (0.03 sec)

tee

Performing critical maintenance on a database may require keeping an entire log of the performed queries and actions. You can activate full output logging in the MySQL client, including the queries you perform. This utility is ideal if you prefer having a log of all of your actions, whether for documentation purposes or as a way to reread your actions if issues occur.

Example:

mysql> \T /tmp/tee.log
Logging to file '/tmp/tee.log'

The log file will contain the queries you perform together with their output.

dim0@testing101:~$ cat /tmp/tee.log
mysql> select * from city limit 5;
+---------+--------------------+------------+---------------------+
| city_id | city               | country_id | last_update         |
+---------+--------------------+------------+---------------------+
|       1 | A Corua (La Corua) |         87 | 2006-02-15 04:45:25 |
|       2 | Abha               |         82 | 2006-02-15 04:45:25 |
|       3 | Abu Dhabi          |        101 | 2006-02-15 04:45:25 |
|       4 | Acua               |         60 | 2006-02-15 04:45:25 |
|       5 | Adana              |         97 | 2006-02-15 04:45:25 |
+---------+--------------------+------------+---------------------+
5 rows in set (0.00 sec)

Whenever you only have access to the MySQL interface and you need to look at one of the files you created, you can do so using the \! command, which executes system commands.
Running a shell command from the MySQL command line interface:

mysql> \! cat /tmp/tee.log mysql> select * from city limit 5; +---------+--------------------+------------+---------------------+ | city_id | city | country_id | last_update | +---------+--------------------+------------+---------------------+ | 1 | A Corua (La Corua) | 87 | 2006-02-15 04:45:25 | | 2 | Abha | 82 | 2006-02-15 04:45:25 | | 3 | Abu Dhabi | 101 | 2006-02-15 04:45:25 | | 4 | Acua | 60 | 2006-02-15 04:45:25 | | 5 | Adana | 97 | 2006-02-15 04:45:25 | +---------+--------------------+------------+---------------------+ 5 rows in set (0.00 sec)

status

In some cases you’d like to see the parameters currently active in your MySQL client. For that you can use the \s command, which shows which options are active on the client. The information shown should not be confused with SHOW VARIABLES, which is focused on the connection variables.
Trigger the status information of your connection using \s:

mysql> \s> -------------- mysql Ver 14.14 Distrib 5.6.15, for Linux (x86_64) using EditLine wrapper Connection id: 13149 Current database: sakila Current user: root@localhost SSL: Not in use Current pager: stdout Using outfile: '' Using delimiter: ; Server version: 5.6.15-63.0-log Percona Server (GPL), Release 63.0 Protocol version: 10 Connection: 127.0.0.1 via TCP/IP Server characterset: latin1 Db characterset: latin1 Client characterset: utf8 Conn. characterset: utf8 TCP port: 3306 Uptime: 10 days 23 hours 32 min 57 sec Threads: 1 Questions: 1203169 Slow queries: 43745 Opens: 626 Flush tables: 1 Open tables: 178 Queries per second avg: 1.268 --------------

clear

Clearing your current input query. Use \c to clear the input field:

mysql> SELECT * -> FROM city -> \c mysql>

pager

Honestly, one of the more useful tools in the MySQL client is the pager, especially for people prone to typing queries and forgetting to add a LIMIT when they don’t need to view the full output.
‘\P less’ will page the query output through the UNIX command less.
You can also choose to write the query results to a file in a parsable format using ‘\P cat > voila.log’.

for example:

mysql> \P cat > /tmp/voila.log PAGER set to 'cat > /tmp/voila.log' mysql> SELECT * FROM city; 600 rows in set (0.01 sec)

This will create the file ‘voila.log’ in which only the output of the query is saved. This solution mimics the ‘SELECT INTO OUTFILE’ statement.
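For comparison, here is a sketch of the server-side statement that the pager trick mimics. Note that SELECT ... INTO OUTFILE writes the file on the server host (not the client), requires the FILE privilege, and refuses to overwrite an existing file; the file name below is just an example:

mysql> SELECT * FROM city
    ->   INTO OUTFILE '/tmp/voila_outfile.log'
    ->   FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';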

During optimisation of your workload, it can be interesting to check whether a query you’ve modified still returns the same output.

mysql >\P md5sum PAGER set to 'md5sum' root@127.0.0.1 [sakila]>select * from city limit 10; 449d5bcae6e0e5b19e7101478934a7e6 - 10 rows in set (0.00 sec) mysql >select city_id, city, country_id, last_update FROM city LIMIT 10 ; 449d5bcae6e0e5b19e7101478934a7e6 - 10 rows in set (0.00 sec)
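Another handy pager target is grep, for instance to hide idle connections while watching the process list; a quick sketch:

mysql> \P grep -v Sleep
PAGER set to 'grep -v Sleep'
mysql> SHOW PROCESSLIST;
(only the non-sleeping threads are printed)
mysql> \n
PAGER set to stdout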

Other commands

There are of course several other options you might use. You can get an overview of all the internal CLI commands with \?.

Disabling the tee and pager functionality described in this blog entry can be done with \t (or notee) and \n (or nopager) respectively.

The post DBA 101: Sometimes forgotten functionality in the MySQL client appeared first on MySQL Performance Blog.

Categories: MySQL

How to monitor ALTER TABLE progress in MySQL

MySQL Performance Blog - Wed, 2014-02-26 08:00

While working on a recent support issue as a Percona Support Engineer, I got one question from a customer asking how to monitor ALTER TABLE progress. For MySQL 5.5 and prior versions, it’s quite difficult to ALTER a table in a running production environment, especially for large tables (with millions of records), because the ALTER will rebuild and lock the table, affecting performance as well as our users. Therefore, even if we start the ALTER, it’s really important to know when it will finish. Even when only creating an index, ALTER TABLE will not rebuild the table if fast_index_creation is ON, but it might still lock the table.

The fast_index_creation feature was introduced in MySQL 5.5 and higher versions; it is also available in MySQL 5.1 with the InnoDB Plugin.

From MySQL 5.6, the “Online DDL” feature has been introduced, which enhances many types of ALTER TABLE operations to avoid copying the table and locking. It also allows SELECT, INSERT, UPDATE, and DELETE statements to run while the table is being altered. So in the latest version we can control the table copy and locking by using the ALGORITHM and LOCK options. But even in MySQL 5.6 there are some ALTER operations that still require a table rebuild, e.g. ADD/DROP column, changing a data type, ADD/DROP primary key, etc. You can check the table here for more details:

“Summary of Online Status for DDL Operations”: http://dev.mysql.com/doc/refman/5.6/en/innodb-create-index-overview.html

So in any case, if required, we can check the ALTER TABLE progress with the solutions below.

One of the solutions is to use the Percona Toolkit utility pt-online-schema-change, which will ALTER the table without locking it for a long time and will show the progress. For example:

nilnandan@Dell-XPS:~$ pt-online-schema-change --alter "ADD COLUMN phone INT" u=root,p=root,D=nil,t=nil_test --execute No slaves found. See --recursion-method if host Dell-XPS has slaves. Not checking slave lag because no slaves were found and --check-slave-lag was not specified. Operation, tries, wait: copy_rows, 10, 0.25 create_triggers, 10, 1 drop_triggers, 10, 1 swap_tables, 10, 1 update_foreign_keys, 10, 1 Altering `nil`.`nil_test`... Creating new table... Created new table nil._nil_test_new OK. Altering new table... Altered `nil`.`_nil_test_new` OK. 2014-02-07T12:20:54 Creating triggers... 2014-02-07T12:20:54 Created triggers OK. 2014-02-07T12:20:54 Copying approximately 12583349 rows... Copying `nil`.`nil_test`: 29% 01:12 remain Copying `nil`.`nil_test`: 60% 00:38 remain Copying `nil`.`nil_test`: 91% 00:08 remain 2014-02-07T12:22:33 Copied rows OK. 2014-02-07T12:22:33 Swapping tables... 2014-02-07T12:22:33 Swapped original and new tables OK. 2014-02-07T12:22:33 Dropping old table... 2014-02-07T12:22:33 Dropped old table `nil`.`_nil_test_old` OK. 2014-02-07T12:22:33 Dropping triggers... 2014-02-07T12:22:33 Dropped triggers OK. Successfully altered `nil`.`nil_test`. nilnandan@Dell-XPS:~$
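If you first want to validate the change without touching any data, a --dry-run is usually worth it: it creates and alters the new table but does not create triggers, copy rows, or swap the tables. A sketch using the same options as the run above:

nilnandan@Dell-XPS:~$ pt-online-schema-change --alter "ADD COLUMN phone INT" u=root,p=root,D=nil,t=nil_test --dry-run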

While using pt-online-schema-change, many times I get questions such as: “What will happen to those data changes (INSERT/UPDATE/DELETE) which are performed during the ALTER TABLE?”

Here I would like to explain it briefly. While pt-online-schema-change is running, if we check the data directory:

root@Dell-XPS:/var/lib/mysql/nil# ll total 830524 drwx------ 2 mysql mysql 4096 Feb 7 12:20 ./ drwxr-xr-x 5 mysql mysql 4096 Feb 7 12:05 ../ -rw-rw---- 1 mysql mysql 65 Jan 31 12:12 db.opt -rw-rw---- 1 mysql mysql 8616 Feb 7 12:06 nil_test.frm -rw-rw---- 1 mysql mysql 822083584 Feb 7 12:18 nil_test.ibd -rw-rw---- 1 mysql mysql 8648 Feb 7 12:20 _nil_test_new.frm -rw-rw---- 1 mysql mysql 28311552 Feb 7 12:20 _nil_test_new.ibd -rw-rw---- 1 mysql mysql 944 Feb 7 12:20 nil_test.TRG -rw-rw---- 1 mysql mysql 40 Feb 7 12:20 pt_osc_nil_nil_test_del.TRN -rw-rw---- 1 mysql mysql 40 Feb 7 12:20 pt_osc_nil_nil_test_ins.TRN -rw-rw---- 1 mysql mysql 40 Feb 7 12:20 pt_osc_nil_nil_test_upd.TRN

We can see that it creates triggers (a separate trigger for INSERT, UPDATE and DELETE) on the original table to update the corresponding rows in the new table (_nil_test_new). So any modifications that happen to the data in the original table during the copy will be reflected in the new table.

NOTE: This tool will not work if any triggers are already defined on the table.

But what if we don’t want to use pt-online-schema-change and instead run a regular ALTER TABLE at the mysql prompt? After some research I found several ways to calculate the progress of ALTER TABLE, especially when innodb_file_per_table is ON. (innodb_file_per_table is ON by default in MySQL 5.6.)

  • Calculate the progress by checking the size of temporary tablespace. 

With innodb_file_per_table=1, when we are running ALTER TABLE, InnoDB creates a temporary tablespace within the same data directory, with a random name starting with #sql and ending with .ibd, like #sql-1c80_27.ibd. For example:

root@Dell-XPS:/var/lib/mysql/nil# ll                     ... -rw-rw---- 1 mysql mysql 8682 Feb 7 13:33 nil_test.frm -rw-rw---- 1 mysql mysql 335544320 Feb 7 13:34 nil_test.ibd -rw-rw---- 1 mysql mysql 8716 Feb 7 13:35 #sql-1c80_27.frm -rw-rw---- 1 mysql mysql 23068672 Feb 7 13:35 #sql-1c80_27.ibd

While altering the table, InnoDB reads the original .ibd file (nil_test.ibd) and writes new pages to #sql-1c80_27.ibd. So by comparing the file sizes of nil_test.ibd and the temporary #sql-1c80_27.ibd, we can check the ALTER TABLE progress with something like this:

#!/bin/bash
# Compare the size of the temporary #sql*.ibd file with the size of the
# original table's .ibd file until the ALTER TABLE finishes.
TABLENAME='nil_test'
TT=$TABLENAME.ibd
while true
do
    A=`du -m '#sql'*'.ibd' 2>/dev/null | cut -f 1`
    if [[ -z "$A" ]] ; then echo "Done"; exit 0 ; fi
    B=`du -m $TT | cut -f 1`
    echo "ALTER TABLE $TABLENAME...$A MB written to tmp tablespace out of $B MB"
    sleep 10
done

When we run the ALTER in mysql, we can simply run this script in the database directory and check the progress like this:

mysql> ALTER TABLE nil_test ADD COLUMN phone int; Query OK, 7582912 rows affected (58.54 sec) Records: 7582912 Duplicates: 0 Warnings: 0 root@Dell-XPS:/var/lib/mysql/nil# ./alter_table_monitor.sh ALTER TABLE nil_test...23 MB written in tmp file out of 485 MB ALTER TABLE nil_test...73 MB written in tmp file out of 485 MB ALTER TABLE nil_test...121 MB written in tmp file out of 485 MB ALTER TABLE nil_test...173 MB written in tmp file out of 485 MB ALTER TABLE nil_test...225 MB written in tmp file out of 485 MB ALTER TABLE nil_test...277 MB written in tmp file out of 485 MB ALTER TABLE nil_test...333 MB written in tmp file out of 485 MB ALTER TABLE nil_test...381 MB written in tmp file out of 485 MB ALTER TABLE nil_test...433 MB written in tmp file out of 485 MB ALTER TABLE nil_test...481 MB written in tmp file out of 485 MB Done root@Dell-XPS:/var/lib/mysql/nil#

There are a couple of things we need to consider while using this script:

  1. We have to change $TABLENAME in the script as per our requirements. It will work for only one ALTER TABLE at a time.
  2. The script should be run from the database directory (e.g. /var/lib/mysql/nil).
  3. These calculations are an approximation, because the new table can differ a bit in size. For example, deleted rows or fragmentation can cause accuracy problems.

I would like to mention here that we cannot check the progress of every ALTER TABLE with the above formula. For example, with fast_index_creation, MySQL creates the table without any secondary indexes and adds the secondary indexes after the data is loaded, so it will not create a temporary tablespace (.ibd file) while creating a secondary index. This process is very interesting, so let me explain it here.

When we add a secondary index with fast_index_creation, it will only create a #sql***.frm file to update the table structure, but no temporary .ibd file. It will do some file sorts and then add the index directly to the original table’s .ibd file. So when you monitor those files, initially you won’t see the file size increasing, but after some time (after the file sorts) the file size starts increasing until the end of the ALTER TABLE. For example:

Initially, you’ll see

root@Dell-XPS:/var/lib/mysql/nil# ll total 409644 drwx------ 2 mysql mysql 4096 Feb 12 10:50 ./ drwxr-xr-x 5 mysql mysql 4096 Feb 12 10:10 ../ -rwxr-xr-x 1 root root 306 Feb 7 13:35 alter_table_monitor.sh* -rw-rw---- 1 mysql mysql 65 Jan 31 12:12 db.opt -rw-rw---- 1 mysql mysql 8750 Feb 12 10:26 nil_test.frm -rw-rw---- 1 mysql mysql 419430400 Feb 12 10:50 nil_test.ibd -rw-rw---- 1 mysql mysql 8750 Feb 12 10:50 #sql-671_25.frm ... root@Dell-XPS:/var/lib/mysql/nil# ll total 409644 drwx------ 2 mysql mysql 4096 Feb 12 10:50 ./ drwxr-xr-x 5 mysql mysql 4096 Feb 12 10:10 ../ -rwxr-xr-x 1 root root 306 Feb 7 13:35 alter_table_monitor.sh* -rw-rw---- 1 mysql mysql 65 Jan 31 12:12 db.opt -rw-rw---- 1 mysql mysql 8750 Feb 12 10:26 nil_test.frm -rw-rw---- 1 mysql mysql 419430400 Feb 12 10:50 nil_test.ibd -rw-rw---- 1 mysql mysql 8750 Feb 12 10:50 #sql-671_25.frm

Then suddenly the .ibd file size starts increasing… until the end.

root@Dell-XPS:/var/lib/mysql/nil# ll total 417836 drwx------ 2 mysql mysql 4096 Feb 12 10:50 ./ drwxr-xr-x 5 mysql mysql 4096 Feb 12 10:10 ../ -rwxr-xr-x 1 root root 306 Feb 7 13:35 alter_table_monitor.sh* -rw-rw---- 1 mysql mysql 65 Jan 31 12:12 db.opt -rw-rw---- 1 mysql mysql 8750 Feb 12 10:26 nil_test.frm -rw-rw---- 1 mysql mysql 427819008 Feb 12 10:50 nil_test.ibd -rw-rw---- 1 mysql mysql 8750 Feb 12 10:50 #sql-671_25.frm .... root@Dell-XPS:/var/lib/mysql/nil# ll total 487456 drwx------ 2 mysql mysql 4096 Feb 12 10:51 ./ drwxr-xr-x 5 mysql mysql 4096 Feb 12 10:10 ../ -rwxr-xr-x 1 root root 306 Feb 7 13:35 alter_table_monitor.sh* -rw-rw---- 1 mysql mysql 65 Jan 31 12:12 db.opt -rw-rw---- 1 mysql mysql 8750 Feb 12 10:50 nil_test.frm -rw-rw---- 1 mysql mysql 499122176 Feb 12 10:51 nil_test.ibd root@Nil-Dell-XPS:/var/lib/mysql/nil#

This is what happens if there is only one secondary index. If there are multiple secondary indexes, then for each index the process will pause, do the file sorts and then add the index, so the number of pauses in the file size growth will be equal to the number of secondary indexes.

  • Calculate the progress by checking the records in information_schema.GLOBAL_TEMPORARY_TABLES 

When the ALTER TABLE is running, we can also check GLOBAL_TEMPORARY_TABLES in information_schema, which gives the count of records that have been inserted into the temporary table. For example:

mysql> select * from GLOBAL_TEMPORARY_TABLES \G; *************************** 1. row *************************** SESSION_ID: 38 TABLE_SCHEMA: nil TABLE_NAME: #sql-1c80_27 ENGINE: InnoDB NAME: #sql-696_26 TABLE_ROWS: 623711 AVG_ROW_LENGTH: 42 DATA_LENGTH: 26787840 INDEX_LENGTH: 0 CREATE_TIME: 2014-02-11 10:37:34 UPDATE_TIME: NULL 1 row in set (0.00 sec) ....... mysql> select * from GLOBAL_TEMPORARY_TABLES \G; *************************** 1. row *************************** SESSION_ID: 38 TABLE_SCHEMA: nil TABLE_NAME: #sql-1c80_27 ENGINE: InnoDB NAME: #sql-696_26 TABLE_ROWS: 7017408 AVG_ROW_LENGTH: 42 DATA_LENGTH: 299663360 INDEX_LENGTH: 0 CREATE_TIME: 2014-02-11 10:37:34 UPDATE_TIME: NULL 1 row in set (0.01 sec)
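Building on that, a rough completion percentage can be computed by joining the temporary table’s row count against the row estimate of the original table. A sketch (both TABLE_ROWS values are InnoDB estimates, so treat the percentage as approximate; the schema and table names are the ones from the example above):

mysql> SELECT t.TABLE_ROWS AS copied_rows,
    ->        o.TABLE_ROWS AS total_rows_estimate,
    ->        ROUND(100 * t.TABLE_ROWS / o.TABLE_ROWS, 1) AS pct_done
    ->   FROM information_schema.GLOBAL_TEMPORARY_TABLES t
    ->   JOIN information_schema.TABLES o
    ->     ON o.TABLE_SCHEMA = t.TABLE_SCHEMA AND o.TABLE_NAME = 'nil_test'
    ->  WHERE t.TABLE_SCHEMA = 'nil' AND t.TABLE_NAME LIKE '#sql%';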

  •  Calculate the progress by checking the Handler_read_rnd_next status variable (global counters).  

While the ALTER TABLE is running, we can also check the Handler_read_rnd_next status variable with SHOW GLOBAL STATUS LIKE 'Handler_read_rnd%' or with mysqladmin extended-status. Check this 2008 post titled “How to estimate query completion time in MySQL” by Baron Schwartz, where he describes it very well.

NOTE: This does not require innodb_file_per_table = ON, but we can use this option only when there are no other parallel transactions running.
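A minimal sketch of that approach: sample the counter, wait a while, sample it again, and compare the delta with the table’s estimated row count (TABLE_ROWS is an estimate, and the counter is global, so this is only meaningful when nothing else on the server is doing full scans):

mysql> SHOW GLOBAL STATUS LIKE 'Handler_read_rnd_next';
mysql> SELECT TABLE_ROWS FROM information_schema.TABLES
    ->  WHERE TABLE_SCHEMA = 'nil' AND TABLE_NAME = 'nil_test';
-- run the SHOW again later; (new value - old value) / TABLE_ROWS gives a rough fraction done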

So in any case it is really difficult to find the ALTER TABLE progress unless MySQL itself provides such a feature, like MariaDB does with the progress column in SHOW PROCESSLIST: https://mariadb.com/kb/en/show-processlist/

I’ve installed MariaDB 5.5 locally and tried it: along with SHOW PROCESSLIST, you can also check progress by running a query against the information_schema.processlist table. But I observed that the progress value differs between the two outputs. It looks like the information_schema.processlist table gives the accurate progress for ALTER TABLE.

[root@percona-pxc3 nil]# mysql -uroot -proot -e "select ID, USER, HOST, DB, TIME, STATE, INFO, STAGE, MAX_STAGE, PROGRESS from information_schema.processlist where ID = 2; show processlist" +----+------+-----------+------+------+-------------------+--------------------------------------------+-------+-----------+----------+ | ID | USER | HOST | DB | TIME | STATE | INFO | STAGE | MAX_STAGE | PROGRESS | +----+------+-----------+------+------+-------------------+--------------------------------------------+-------+-----------+----------+ | 2 | root | localhost | nil | 3 | copy to tmp table | ALTER TABLE nil_test ADD COLUMN phone1 int | 1 | 2 | 4.279 | +----+------+-----------+------+------+-------------------+--------------------------------------------+-------+-----------+----------+ +----+------+-----------+------+---------+------+-------------------+--------------------------------------------+----------+ | Id | User | Host | db | Command | Time | State | Info | Progress | +----+------+-----------+------+---------+------+-------------------+--------------------------------------------+----------+ | 2 | root | localhost | nil | Query | 3 | copy to tmp table | ALTER TABLE nil_test ADD COLUMN phone1 int | 2.140 | | 29 | root | localhost | NULL | Query | 0 | NULL | show processlist | 0.000 | +----+------+-----------+------+---------+------+-------------------+--------------------------------------------+----------+ [root@percona-pxc3 nil]# ............... ........... [root@percona-pxc3 nil]# mysql -uroot -proot -e "select ID, USER, HOST, DB, TIME, STATE, INFO, STAGE, MAX_STAGE, PROGRESS from information_schema.processlist where ID = 2; show processlist" +----+------+-----------+------+------+-------------------+--------------------------------------------+-------+-----------+----------+ | ID | USER | HOST | DB | TIME | STATE | INFO | STAGE | MAX_STAGE | PROGRESS | +----+------+-----------+------+------+-------------------+--------------------------------------------+-------+-----------+----------+ | 2 | root | localhost | nil | 25 | copy to tmp table | ALTER TABLE nil_test ADD COLUMN phone1 int | 1 | 2 | 45.613 | +----+------+-----------+------+------+-------------------+--------------------------------------------+-------+-----------+----------+ +----+------+-----------+------+---------+------+-------------------+--------------------------------------------+----------+ | Id | User | Host | db | Command | Time | State | Info | Progress | +----+------+-----------+------+---------+------+-------------------+--------------------------------------------+----------+ | 2 | root | localhost | nil | Query | 25 | copy to tmp table | ALTER TABLE nil_test ADD COLUMN phone1 int | 22.807 | | 34 | root | localhost | NULL | Query | 0 | NULL | show processlist | 0.000 | +----+------+-----------+------+---------+------+-------------------+--------------------------------------------+----------+ [root@percona-pxc3 nil]# .............. ......... 
[root@percona-pxc3 nil]# mysql -uroot -proot -e "select ID, USER, HOST, DB, TIME, STATE, INFO, STAGE, MAX_STAGE, PROGRESS from information_schema.processlist where ID = 2; show processlist" +----+------+-----------+------+------+-------------------+--------------------------------------------+-------+-----------+----------+ | ID | USER | HOST | DB | TIME | STATE | INFO | STAGE | MAX_STAGE | PROGRESS | +----+------+-----------+------+------+-------------------+--------------------------------------------+-------+-----------+----------+ | 2 | root | localhost | nil | 54 | copy to tmp table | ALTER TABLE nil_test ADD COLUMN phone1 int | 1 | 2 | 98.300 | +----+------+-----------+------+------+-------------------+--------------------------------------------+-------+-----------+----------+ +----+------+-----------+------+---------+------+-------------------+--------------------------------------------+----------+ | Id | User | Host | db | Command | Time | State | Info | Progress | +----+------+-----------+------+---------+------+-------------------+--------------------------------------------+----------+ | 2 | root | localhost | nil | Query | 54 | copy to tmp table | ALTER TABLE nil_test ADD COLUMN phone1 int | 49.157 | | 39 | root | localhost | NULL | Query | 0 | NULL | show processlist | 0.000 | +----+------+-----------+------+---------+------+-------------------+--------------------------------------------+----------+ [root@percona-pxc3 nil]# [root@percona-pxc3 nil]# mysql -uroot -proot -e "select ID, USER, HOST, DB, TIME, STATE, INFO, STAGE, MAX_STAGE, PROGRESS from information_schema.processlist where ID = 2; show processlist" +----+------+-----------+------+------+-------+------+-------+-----------+----------+ | ID | USER | HOST | DB | TIME | STATE | INFO | STAGE | MAX_STAGE | PROGRESS | +----+------+-----------+------+------+-------+------+-------+-----------+----------+ | 2 | root | localhost | nil | 0 | | NULL | 0 | 0 | 0.000 | +----+------+-----------+------+------+-------+------+-------+-----------+----------+ +----+------+-----------+------+---------+------+-------+------------------+----------+ | Id | User | Host | db | Command | Time | State | Info | Progress | +----+------+-----------+------+---------+------+-------+------------------+----------+ | 2 | root | localhost | nil | Sleep | 1 | | NULL | 0.000 | | 40 | root | localhost | NULL | Query | 0 | NULL | show processlist | 0.000 | +----+------+-----------+------+---------+------+-------+------------------+----------+ [root@percona-pxc3 nil]#

Conclusion: As we know, some ALTER TABLE operations can cause problems like a server hang or crash, so whenever we plan to run one on production servers, it has to be well planned. As MySQL doesn’t provide a “progress bar,” we can use the methods explained above to check progress. If anyone knows of any other method aside from the above, please share it in the comments and I would be glad to add it to this post.

The post How to monitor ALTER TABLE progress in MySQL appeared first on MySQL Performance Blog.

Categories: MySQL

Quick review of InfiniDB 4

MySQL Performance Blog - Tue, 2014-02-25 11:00

I’ve mentioned InfiniDB before in a previous post titled “Star Schema Bechmark: InfoBright, InfiniDB and LucidDB,” but it’s been 4 years since that was published. Recently I evaluated column-storage solutions for Percona Cloud Tools and took another look at InfiniDB. Since then version 4 has been released, which I think is worth attention.

What is interesting in InfiniDB 4:

  • Fully open source, GPLv2. There are no features reserved for an Enterprise version
  • Multiple CPUs are used even for single query execution
  • WINDOW functions in SQL queries

What are WINDOW functions? In short, in regular SQL you can’t access the previous and next rows relative to a given row (at least not without complex self joins), and WINDOW functions allow exactly that.

As a basic example:

SELECT start_ts, query_time, LAG(query_time) OVER (ORDER BY start_ts) prev_time FROM qc WHERE agent_id = 1;

So for a given row (start_ts) we access not only the current query_time, but also the previous query_time via LAG(query_time). The result looks like this:

start_ts | query_time | prev_sum ---------------------+----------------+---------- 2013-06-18 22:55:00 | 0.281992 | 2013-06-18 23:00:00 | 0.292305 | 0.281992 2013-06-18 23:05:00 | 0.215658 | 0.292305 2013-06-18 23:10:00 | 0.243597 | 0.215658 2013-06-18 23:15:00 | 0.35189 | 0.243597 2013-06-18 23:20:00 | 0.543079 | 0.35189 2013-06-18 23:25:00 | 0.495779 | 0.543079 2013-06-18 23:30:00 | 0.589326 | 0.495779 2013-06-18 23:35:00 | 0.210907 | 0.589326 2013-06-18 23:40:00 | 0.167944 | 0.210907 2013-06-18 23:45:00 | 0.202401 | 0.167944 2013-06-18 23:50:00 | 0.171676 | 0.202401 2013-06-18 23:55:00 | 0.234001 | 0.171676 2013-06-19 00:00:00 | 0.214086 | 0.234001

It opens up a lot of interesting possibilities, like calculating growth or decline over time periods.
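For instance, here is a sketch of a period-over-period percentage change built on the same example query (the qc table and the agent_id/query_time columns are taken from the example above; I am assuming the usual ROUND() function is available through the MySQL front end):

SELECT start_ts,
       query_time,
       LAG(query_time) OVER (ORDER BY start_ts) AS prev_time,
       ROUND(100 * (query_time - LAG(query_time) OVER (ORDER BY start_ts))
             / LAG(query_time) OVER (ORDER BY start_ts), 1) AS pct_change
FROM qc
WHERE agent_id = 1;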

I am still evaluating different solutions, but I decided to share this about InfiniDB, as it looks quite interesting now.

The post Quick review of InfiniDB 4 appeared first on MySQL Performance Blog.

Categories: MySQL

Percona Toolkit 2.2.7 is now available

MySQL Performance Blog - Tue, 2014-02-25 10:30

Percona is glad to announce the release of Percona Toolkit 2.2.7 on February 24th, 2014 (Downloads are available here and from the Percona Software Repositories). This release is the current GA (Generally Available) stable release in the 2.2 series. This release fixes one security bug. An upgrade is recommended.

Fixed bug 1279502: --version-check behaves like spyware

The configuration for what information Percona Toolkit should retrieve is not hardcoded in the scripts. Instead, it is downloaded from v.percona.com. One of the possible parameters was the name of a binary file to execute with "--version." This was fixed by removing the ability to execute arbitrary binary files with the version parameter. You can find more information about the benefits of "version check" in the launch post for Percona Toolkit 2.2.

All Percona software is free and open-source. Details of the release can be found in the release notes and the 2.2.7 milestone at Launchpad. Bugs can be reported on the Percona Toolkit launchpad bug tracker.

The post Percona Toolkit 2.2.7 is now available appeared first on MySQL Performance Blog.

Categories: MySQL

The MySQL ARCHIVE storage engine – Alternatives

MySQL Performance Blog - Mon, 2014-02-24 16:01

In my previous post I pointed out that the existing ARCHIVE storage engine in MySQL may not be the one that will satisfy your needs when it comes to effectively storing large and/or old data. But are there any good alternatives? As the primary purpose of this engine is to store rarely accessed data in a disk-space-efficient way, I will focus here on data compression abilities rather than on performance.

The InnoDB engine provides a compressed row format, but is its efficiency even close to that of the ARCHIVE engine? You can also compress MyISAM tables by using the myisampack tool, but that also means the table will be read-only after such an operation.

Moreover, I don’t trust MyISAM or Archive when it comes to data durability. Fortunately, a fairly new player (open source since April 2013) came along in this field – TokuDB! It seems to provide excellent compression ratios, it’s fully ACID compliant, and it does not have any of the limitations present in Archive, so its functionality is much more like InnoDB’s! This may also allow you to store production data on SSD drives, whose disk space cost is still higher than on traditional disks, where otherwise it could be too expensive.

To better illustrate what choices we have, I made a very simple disk-savings comparison of all the mentioned variants.
I used an example table with some scientific data fetched from here (no indexes):

CREATE TABLE `table1` ( `snp_id` int(11) DEFAULT NULL, `contig_acc` varchar(32) DEFAULT NULL, `contig_ver` tinyint(4) DEFAULT NULL, `asn_from` int(11) DEFAULT NULL, `asn_to` int(11) DEFAULT NULL, `locus_id` int(11) DEFAULT NULL, `locus_symbol` varchar(128) DEFAULT NULL, `mrna_acc` varchar(128) DEFAULT NULL, `mrna_ver` int(11) DEFAULT NULL, `protein_acc` varchar(128) DEFAULT NULL, `protein_ver` int(11) DEFAULT NULL, `fxn_class` int(11) DEFAULT NULL, `reading_frame` int(11) DEFAULT NULL, `allele` text, `residue` text, `aa_position` int(11) DEFAULT NULL, `build_id` varchar(4) NOT NULL, `ctg_id` int(11) DEFAULT NULL, `mrna_start` int(11) DEFAULT NULL, `mrna_stop` int(11) DEFAULT NULL, `codon` text, `protRes` char(3) DEFAULT NULL, `contig_gi` int(11) DEFAULT NULL, `mrna_gi` int(11) DEFAULT NULL, `mrna_orien` tinyint(4) DEFAULT NULL, `cp_mrna_ver` int(11) DEFAULT NULL, `cp_mrna_gi` int(11) DEFAULT NULL, `verComp` varchar(7) NOT NULL )
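For reference, here is a sketch of statements one could use to produce the compressed variants compared below. For the compressed InnoDB cases I am assuming innodb_file_per_table=1 and (on 5.5/5.6) innodb_file_format=Barracuda; the compressed MyISAM variant is produced offline with the myisampack utility rather than with SQL:

ALTER TABLE table1 ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;  -- default 8kB pages
ALTER TABLE table1 ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=4;  -- 4kB pages
ALTER TABLE table1 ENGINE=TokuDB;                                         -- tokudb_zlib, the default
ALTER TABLE table1 ENGINE=TokuDB ROW_FORMAT=TOKUDB_LZMA;                  -- highest compression
ALTER TABLE table1 ENGINE=ARCHIVE;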

ARCHIVE storage engine

mysql >show table status like 'table1'\G *************************** 1. row *************************** Name: table1 Engine: ARCHIVE Version: 10 Row_format: Compressed Rows: 19829016 Avg_row_length: 11 Data_length: 221158267 Max_data_length: 0 Index_length: 0 Data_free: 0 Auto_increment: NULL Create_time: NULL Update_time: 2013-12-22 23:58:51 Check_time: NULL Collation: latin1_swedish_ci Checksum: NULL Create_options: Comment: 1 row in set (0.28 sec) -rw-rw----. 1 przemek przemek 211M Dec 22 23:58 table1.ARZ

TokuDB engine, default compression

mysql >show table status like 'table1'\G *************************** 1. row *************************** Name: table1 Engine: TokuDB Version: 10 Row_format: tokudb_zlib Rows: 19829016 Avg_row_length: 127 Data_length: 2518948412 Max_data_length: 9223372036854775807 Index_length: 0 Data_free: 6615040 Auto_increment: NULL Create_time: 2013-12-23 00:03:47 Update_time: 2013-12-23 00:12:14 Check_time: NULL Collation: latin1_swedish_ci Checksum: NULL Create_options: Comment: 1 row in set (0.13 sec) -rwxrwx--x. 1 przemek przemek 284M Dec 23 00:12 _b_tokudb_table1_main_32_1_18_B_0.tokudb

TokuDB engine, highest compression

mysql [localhost] {msandbox} (b_tokudb) > show table status like 'table1'\G *************************** 1. row *************************** Name: table1 Engine: TokuDB Version: 10 Row_format: tokudb_lzma Rows: 19829016 Avg_row_length: 127 Data_length: 2518948412 Max_data_length: 9223372036854775807 Index_length: 0 Data_free: 6950912 Auto_increment: NULL Create_time: 2013-12-23 00:43:47 Update_time: 2013-12-23 00:49:14 Check_time: NULL Collation: latin1_swedish_ci Checksum: NULL Create_options: row_format=TOKUDB_LZMA Comment: 1 row in set (0.01 sec) -rwxrwx--x. 1 przemek przemek 208M Dec 23 00:49 _b_tokudb_sql_980_2_main_1b92_2_18.tokudb

(By the way, did you notice how the file name changed after altering the table with a different compression?
It no longer reflects the real table name, which is quite confusing.)

InnoDB engine, uncompressed

mysql > show table status like 'table1'\G *************************** 1. row ***************************            Name: table1          Engine: InnoDB         Version: 10      Row_format: Compact            Rows: 19898159  Avg_row_length: 117     Data_length: 2343567360 Max_data_length: 0    Index_length: 0       Data_free: 4194304  Auto_increment: NULL     Create_time: 2014-01-01 16:47:03     Update_time: NULL      Check_time: NULL       Collation: latin1_swedish_ci        Checksum: NULL  Create_options:         Comment: 1 row in set (0.42 sec) -rw-rw----. 1 przemek przemek 2.3G Jan  1 16:37 table1.ibd

InnoDB engine, compressed with default page size (8kB)

mysql > show table status like 'table1'\G *************************** 1. row ***************************            Name: table1          Engine: InnoDB         Version: 10      Row_format: Compressed            Rows: 19737546  Avg_row_length: 59     Data_length: 1171783680 Max_data_length: 0    Index_length: 0       Data_free: 5767168  Auto_increment: NULL     Create_time: 2014-01-01 18:51:22     Update_time: NULL      Check_time: NULL       Collation: latin1_swedish_ci        Checksum: NULL  Create_options: row_format=COMPRESSED         Comment: 1 row in set (0.31 sec) -rw-rw----. 1 przemek przemek 1.2G Jan  1 18:51 table1.ibd

InnoDB engine, compressed with 4kB page size

mysql > show table status like 'table1'\G *************************** 1. row *************************** Name: table1 Engine: InnoDB Version: 10 Row_format: Compressed Rows: 19724692 Avg_row_length: 30 Data_length: 592445440 Max_data_length: 0 Index_length: 0 Data_free: 3932160 Auto_increment: NULL Create_time: 2014-01-01 19:41:12 Update_time: NULL Check_time: NULL Collation: latin1_swedish_ci Checksum: NULL Create_options: row_format=COMPRESSED KEY_BLOCK_SIZE=4 Comment: 1 row in set (0.03 sec) -rw-rw----. 1 przemek przemek 584M Jan 1 19:41 table1.ibd

MyISAM engine, uncompressed

mysql > show table status like 'table1'\G *************************** 1. row *************************** Name: table1 Engine: MyISAM Version: 10 Row_format: Dynamic Rows: 19829016 Avg_row_length: 95 Data_length: 1898246492 Max_data_length: 281474976710655 Index_length: 1024 Data_free: 0 Auto_increment: NULL Create_time: 2013-12-23 11:02:28 Update_time: 2013-12-23 11:03:45 Check_time: NULL Collation: latin1_swedish_ci Checksum: NULL Create_options: Comment: 1 row in set (0.01 sec) -rw-rw----. 1 przemek przemek 1.8G Dec 23 11:03 table1.MYD

MyISAM engine, compressed (myisampack)

mysql > show table status like 'table1'\G *************************** 1. row *************************** Name: table1 Engine: MyISAM Version: 10 Row_format: Compressed Rows: 19829016 Avg_row_length: 42 Data_length: 848098828 Max_data_length: 281474976710655 Index_length: 1024 Data_free: 0 Auto_increment: NULL Create_time: 2013-12-23 11:02:28 Update_time: 2013-12-23 11:03:45 Check_time: NULL Collation: latin1_swedish_ci Checksum: 853535317 Create_options: Comment: 1 row in set (0.00 sec) -rw-rw----. 1 przemek przemek 809M Dec 23 11:03 table1.MYD

Compression summary table

Engine  | Compression                | Table size [MB]
--------+----------------------------+----------------
InnoDB  | none                       | 2272
InnoDB  | KEY_BLOCK_SIZE=8           | 1144
InnoDB  | KEY_BLOCK_SIZE=4           | 584
MyISAM  | none                       | 1810
MyISAM  | compressed with myisampack | 809
Archive | default                    | 211
TokuDB  | ZLIB                       | 284
TokuDB  | LZMA                       | 208

So the clear winner is TokuDB, leaving InnoDB far behind. But this is just one test – the results may be very different for your specific data.

To get an even better idea, let’s compare several crucial features available in the mentioned storage engines:

Feature      | Archive      | MyISAM (compressed) | InnoDB | TokuDB
-------------+--------------+---------------------+--------+--------
DML          | only INSERTs | no                  | yes    | yes
Transactions | no           | no                  | yes    | yes
ACID         | no           | no                  | yes    | yes
Indexes      | no           | yes                 | yes    | yes
Online DDL   | no           | no                  | yes *  | yes **

* – since version 5.6, with some limitations
** – supports add/drop indexes, add/drop/rename columns and expand int, char, varchar and varbinary data types

Summary

TokuDB seems to be an excellent alternative when it comes to disk space usage efficiency, but this is perhaps not the only reason why you should try it.
You may want to check these articles too:

The post The MySQL ARCHIVE storage engine – Alternatives appeared first on MySQL Performance Blog.

Categories: MySQL

Before every release: A glimpse into Percona XtraDB Cluster CI testing

MySQL Performance Blog - Fri, 2014-02-21 15:46

I spoke last month at linux.conf.au 2014 in Perth, Australia, and one of my sessions focused on the “Continuous Integration (CI) testing of Percona XtraDB Cluster (PXC)” at the Developer, Testing, Release and CI miniconf.

Here is the video of the presentation:

Here is the presentation itself:

Percona XtraDB Cluster before every release: Glimpse into CI testing from Raghavendra Prabhu

Below is a rough transcript of the talk:

This talk covered the continuous integration testing of Galera clusters; specifically, Percona XtraDB Cluster (PXC), which is based on Galera, is taken into consideration. Due to the nature of the cluster, existing MySQL testing procedures cannot fully test it; newer, novel methodologies are required and are used to uncover bugs.

The QA automation of PXC primarily involves:

a) Jenkins

  • Primarily involves triggering of jobs, starting from a VCS (bzr) check-in through build cloning, culminating in tests and RPM/DEB builds.
  • In some cases, manual trigger is used, whereas in other cases, SCM polling is made use of.
  • Build blocking is also used to enforce implicit job processing dependencies, for instance when galera library needs to be embedded.
  • Parameterized triggers to decrease slow VCS clones, and to pass parameters to subsequent jobs. Build plumbing and fork/join with jobs are also used.

b) Sysbench

  • Here it is used for both benchmarking and testing. Requests are dispatched to nodes simultaneously to uncover latent bugs with synchronous replication, especially around transactions – rollbacks and commits – and conflicts; this also helps with instrumentation of latency.
  • A history of measurements from previous jobs is maintained for time-series graphing of results. This helps in identifying performance regressions.
  • MTR test suite is re-used for creating instances.

c) Random Query Generator (RQG)

  • This has again proved valuable in PXC QA; combinations testing, in particular, is used to test different combinations of options, some of which may not come up in general testing but may be used out there in production by someone.
  • As in sysbench, this also stresses multiple nodes at the same time, but to a much higher degree. A slightly modified RQG, ported from the MariaDB RQG Galera extension (https://mariadb.com/kb/en/rqg-extensions-for-mariadb-features/), is being used. Various kinds of statements and transactions are tested, but most importantly, since they run concurrently, bugs surface much more easily. Several MDL and DDL related bugs (with TOI) have been found and successfully fixed with this method.
  • With combinations testing, since the number of combinations can get astronomically large, pruning of infeasible combinations is also done.
  • It has also been extended to collect backtraces when the server is hard deadlocked (when the deadlock reporter also fails). This has been quite valuable with bugs where obtaining backtraces has been vital.

d) SST Testing

  • SST stands for State Snapshot Transfer. This is more of an end-to-end test, in that it starts a node, loads it with data, starts another node after SST from the first node, and makes sure the data is consistent (by checksumming). This is done with several different combinations of configurations, which also tests the SST mechanism itself while at the same time testing the server with these combinations. So a success of these tests indicates a cluster will start and work as intended (thus, no blue smoke!).
  • This re-uses PXB test suite with Xtrabackup.
  • Also, serves to test PXC on different platforms (13×2 so far).

e) Replication Testing

  • This was written to test upgrades between major versions, 5.5 and 5.6
  • Intended to test rolling upgrade
  • Re-uses earlier test components – MTR, sysbench, SST – since it involves starting two nodes, upgrading one of them and replication stream between them
  • Overlaps with other tests in coverage

f) Other techniques are also used, such as setting lock_wait_timeout (which defaults to one year) to catch MDL bugs, and using release and debug builds differently in tests, since a bug may manifest differently in each (an assertion/crash in a debug build can be a server hang in a release build, for instance).

g) In future, we intend to have:

  • Testing at a higher level with Chef, Puppet etc., intending
    to test packaging
  • Also, to test distro idiosyncrasies
  • Automated handling of test results with extraction and
    analysis of logs and backtraces 

  • Also, currently, we don’t test for externalities like network jitters (something that can be simulated with netem). Doing this requires moving from (or cloning) node-per-process model to node-per-slave (jenkins slave). This can, for instance, help with debugging of issues associated with evs (extended virtual synchrony) layer of galera.
    • Incidentally, this was also one of the questions after the talk, since a few other *aaS providers tend to bring up separate jenkins slaves for testing, where they test for features associated with virtualization for instance (as in case of OpenStack).

To conclude, as you may notice there is a certain degree of overlap between these tests. This is intended, so that if one type of test misses a bug, another catches it, making it easier to detect the hard-to-catch bugs.

The post Before every release: A glimpse into Percona XtraDB Cluster CI testing appeared first on MySQL Performance Blog.

Categories: MySQL

Percona XtraDB Cluster 5.6.15-25.4 is now available

MySQL Performance Blog - Thu, 2014-02-20 11:07


Percona is glad to announce the new release of Percona XtraDB Cluster 5.6 on February 20th 2014. Binaries are available from the downloads area or from our software repositories.

Percona XtraDB Cluster 5.6.15-25.4 is based on Percona Server 5.6.15-63.0 (including all the bug fixes in it), Galera Replicator 3.3 and Codership wsrep API 5.6.15-25.2, and is now the current General Availability release. All of Percona‘s software is open source and free; all the details of the release can be found in the 5.6.15-25.4 milestone at Launchpad.

Bugs fixed

  • Parallel Applying was not functioning which was evident from the wsrep_cert_deps_distance being 1.0 at all times. Bug fixed #1277703.
  • Binlog events were created for the statements for non-InnoDB tables, but they were never cleaned from transaction cache, which could lead to node crash. Bug fixed #1277986.
  • Percona XtraDB Cluster didn’t validate the parameters of wsrep_provider_options when node was being started. Bug fixed #1260193.
  • clustercheck script would mark node as down on Debian based systems if it was run with default values because it was looking for the defaults-extra-file in the wrong directory. Bug fixed #1276076.
  • A deadlock would happen when a NULL unique key was inserted. A workaround has been implemented to support NULL keys by using the md5 sum of the full row as the key value. Bug fixed #1276424
  • Variables innodb-log-group-home-dir and innodb-data-home-dir are now handled by default (ie., there is no need to set them up in sst_special_dirs). Bug fixed #1276904.
  • Builds now use system Zlib instead of bundled one. Bug fixed #1277928.
  • If the transaction size exceeded the wsrep_max_ws_size limit, a warning message appeared in the error log and replication was skipped; however, the transaction was still committed on the master node, leaving the cluster in an inconsistent state. Bug fixed #1280557.
  • The variable wsrep_load_data_splitting now defaults to OFF; using it turned ON with autocommit set to 0 is not recommended.
  • Other bugs fixed: #1279844.

Release notes for Percona XtraDB Cluster 5.6.15-25.4 are available in our online documentation along with the installation and upgrade instructions. We did our best to eliminate bugs and problems during testing of the release, but this is software, so bugs are expected. If you encounter any, please report them to our bug tracking system.

Percona XtraDB Cluster Errata can be found in our documentation.

The post Percona XtraDB Cluster 5.6.15-25.4 is now available appeared first on MySQL Performance Blog.

Categories: MySQL

You spoke, we listened: Percona Cloud Tools accelerates query reviews

MySQL Performance Blog - Wed, 2014-02-19 15:46

The Percona Cloud Tools beta service continues to grow its query analysis capabilities with new features for reviewing queries. These enhancements will:

  • Provide better visibility into the query performance of your next code deployment
  • Introduce query organization tools for efficiency
  • Foster better collaboration in teams

Join the hundreds of other users who are using Percona’s MySQL expertise and Percona Cloud Tools to improve the performance of their applications. Of course, it’s free during the beta.

What’s new

Query Review

Query Review gives you the right resources to efficiently review your application’s most important database activities. New tools enable you to assess, categorize, and comment on each of your application’s queries.

New, Reviewed, Needs Attention


Investigating a problem? Now when you review your application’s queries, we’ll keep track of queries you’ve evaluated so you don’t have to. All queries can be marked as “New,” “Reviewed,” or “Needs attention.” We mark queries as “New” by default. These three phrases will save you significant time and can help you more efficiently perform query analysis as a team. New filters in Percona Cloud Tools also allow you to quickly assess the performance of new queries introduced in your latest code deployment.

Query Tags


Query tags are a flexible way to allow your team to use your own language to categorize queries. There are many ways to use tags. One approach we recommend is to record which queries belong to different sub-systems of your application such as ‘checkout’ or ‘hotel-search’. This should help to connect the experiences of your application’s users to the underlying queries.

Comments


Comments enable you and your team to better collaborate on improving the performance of your application as a team. Anyone that is a part of your organization can read and create query comments.

Sorts, filters, and search


You spoke and we listened. Finding queries is now easier than ever in Percona Cloud Tools with tools for sorting, filtering, and free text search. Feel free to send other feature suggestions by emailing cloud-tools@percona.com.

Bugs Fixed
  • Extended slow log metrics/verbosity did not work for Percona Server 5.1 or Percona Server 5.5.10 and older because a variable name was changed as of Percona Server 5.5.10.
  • Custom time range selection would be lost if the “Report” page was reloaded.
  • When reporting interval for Query Analytics was set to less than 2 minutes, sometimes the service would stop working.
  • Same server/agent was shown twice in the Performance tab

Thanks to all of the beta testers for using Percona Cloud Tools and providing feedback. Your suggestions and bug reports have been very helpful. Keep ‘em coming!

Still haven’t tried Percona Cloud Tools? Sign up today and you’ll be up and running in minutes! Having trouble? We’re here to help. Follow the “Support” link after you sign in and tell us about the problem, or join the Percona Cloud Tools conversation at Percona Forums.

The post You spoke, we listened: Percona Cloud Tools accelerates query reviews appeared first on MySQL Performance Blog.

Categories: MySQL

Oracle’s Morgan Tocker opens up about MySQL development, MySQL 5.7

MySQL Performance Blog - Tue, 2014-02-18 08:00

Today’s post features an interview with Morgan Tocker, MySQL community manager at Oracle. Morgan is an old friend of Percona, having worked here as director of MySQL training from 2009 to 2011. He’s also done stints at MySQL, Sun Microsystems and InPowered. You can follow his adventures at his blog, “Master MySQL.”  You can also connect with him face-to-face at the Percona Live MySQL Conference and Expo this April 1-4 in Santa Clara, Calif.  Use the promotional code “SeeMeSpeak” on the registration page and save 10 percent.

*  *  *

Oracle’s Morgan Tocker

The Toronto-based Morgan is gearing up for his Percona Live session, “MySQL 5.7: Core Server Improvements.” For MySQL 5.7, he explained, one of the engineering goals is to continue to clean up and simplify code and improve the architecture of the MySQL Server. As part of this spring-cleaning process, he said some features in MySQL 5.7 have a change in behavior. For example, the EXPLAIN PARTITIONS and EXPLAIN EXTENDED syntax will be enabled by default. Other features will be deprecated and may be removed – such as the InnoDB Monitor tables.

Morgan said his session aims to describe the motivations behind each of these changes and how they will affect you as a MySQL DBA. He’ll also describe how some of the work will lift historical architectural limitations and help build a better foundation for future innovation.

I had the opportunity to speak with him via Skype the other day. Here’s what he had to say.

Tom: Would you say that Oracle is a worthy custodian of MySQL?

Morgan: When you phrase it like that you sort of open the door for one to nitpick. I certainly don’t think that Oracle has done any worse than anyone else to date.  The quality of releases has improved dramatically, and I am very happy with the choice of features that have been implemented.  There has been a huge focus on performance, diagnostics and online operations, which I consider very important.


Tom:
How big is the MySQL engineering team at Oracle?

Morgan: The official number is “hundreds” – our engineering team is now twice the size that it was at the time of the Sun acquisition.  The QA team also tripled in size, and we’re still hiring for a number of positions.


Tom:
How does Oracle handle MySQL development?

Morgan: The short version is that we release new features as part of Development Milestone Releases (or so called DMRs) up to 4 times per year, with a major version every 2-3 years.

The DMRs are designed to be of release candidate quality, and any new features have undergone rigorous testing and qualification internally.  My colleague Anitha Gopi has written about the process before:

* 80%+ code coverage in tests.
* Complete functional and non functional test coverage of changed and new functionality
* No regressions

MySQL 5.7 is currently at DMR3 (December 2013), and some of the change log entries for DMR4 are already showing up on dev.mysql.com.


Tom:
Why do you wait two to three years?

Morgan: Our customers have told us that they do not all want to reintegrate with a new release 4-times-per-year, and the feedback we’ve been receiving is that every 2-3 years is “about right”.  But at the same time, using this agile style of development internally is important to be able to have an opportunity to receive feedback on what is in the pipeline for the next major version.

You are allowed to assume that these development milestone releases are release-candidate quality, and unofficially some people do run them in production. In fact I run my blog on MySQL 5.7.


Tom:
Really? How is MySQL 5.7 working for you, personally?

Morgan: It’s working pretty well. I’m using it because I wanted to test WordPress to ensure that, eating the dog food, we wouldn’t be introducing any problems that would be felt downstream. And once I find the right project I’ll also want to do the same thing with Drupal to make sure I understand what their problems and needs are, too.

I used to power my blog with Jekyll, which is a static site generator, but I’ve since switched to WordPress last month and I’ve been very happy.


Tom:
Have you seen anything notable or surprising performance-wise?

Morgan: No, not in performance but I did run it in Strict mode, which is one of the new defaults we’re proposing for MySQL 5.7. I’m not using many plug-ins but it’s actually working.


Tom:
What’s “Strict Mode”?

Morgan: MySQL has traditionally been a little relaxed in that it will tolerate incorrect data.  That is to say that if you insert 15 characters into a CHAR(10), then it will insert the first 10, present you with a warning and then throw away the remaining five.

We want to change that behavior so now it will just refuse that insert and say: “Sorry you tried to insert 15 characters into a column that will only accept 10.”  This is something that a developer will usually want to look at because it could mean a possible bug in an application.  Some of our users have even uncovered SQL injection attempts (security issues) by moving to Strict Mode.
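A minimal sketch of the behavior he describes (the sql_mode names and the error/warning numbers are standard MySQL ones):

SET SESSION sql_mode = '';                      -- traditional relaxed behavior
CREATE TABLE t (c CHAR(10));
INSERT INTO t VALUES ('123456789012345');       -- stores '1234567890' and raises warning 1265 (data truncated)
SET SESSION sql_mode = 'STRICT_TRANS_TABLES';   -- strict behavior
INSERT INTO t VALUES ('123456789012345');       -- refused with ERROR 1406: Data too long for column 'c'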


Tom:
Calvin Sun of Twitter said in my last post a couple weeks ago that he was looking forward to having some visibility and perhaps even some influence into Oracle’s MySQL roadmap in terms of upcoming releases. Will he be disappointed?

Morgan: No, I don’t think he’ll be disappointed. What Calvin is looking for is absolutely one of the goals of our DMR releases.  We want people to take a look, and truly appreciate any feedback that we receive.

To add to that – one of our goals in the Community team is to communicate some of the changes we have planned for 5.7, and collate the feedback and pass it back to the development team.   Having previously done this via both blogging and events such as FOSDEM, I can tell you that collecting feedback in person is always better.  So I’ll be looking forward to catching up with Calvin and seeing how we can work together.


Tom:
The last time you presented at Percona Live was 2009 – when it was officially called the “Percona Performance Conference” and was held in parallel with the MySQL Conference & Expo back when it was co-sponsored by O’Reilly Media and Sun. Who should make it a point of attending you session at this year’s conference?

Morgan: My session is titled “MySQL 5.7: Core Server Improvements” and it will be relevant for DBAs and also people writing MySQL code. Some of what I’ll be sharing in my session is that Oracle has a long-term outlook, and isn’t afraid to do the heavy lifting required to improve the architecture of the MySQL Server.

What makes me really proud of this talk is that I get to demonstrate that although we occupy a space with a lot of competition, Oracle is not trying to get ahead short-term by implementing flashy new features.  There is a strong focus on making the core product better, which is often a much more resource intensive process.

The part that is relevant to the audience is as we transition into this new and improved architecture, some functionality may change. Like with the Strict Mode example, there will be cases where we want to make sure we communicate how and why the change is happening, and I’ll be collecting feedback as to how we can make any changes on our side to make the process smoother.

There’s also an entire category of my blog soliciting feedback.


Tom:
What sort of feedback have you been getting?

Morgan: Some of it has been absolutely amazing. We don’t get into specifics about unreleased features, but I can definitely say that we have taken the feedback onboard and it has changed some of our initial plans. I’m hoping to share more in the future.


Tom:
What other Percona Live sessions are you interested in attending? What topics in particular?

Morgan:  Definitely MySQL Fabric. Mats Kindahl will be leading a session titled “Sharding and Scale-out using MySQL Fabric” and I’m looking forward to that one very much.

We announced that we were working on MySQL Fabric at MySQL Connect in September 2013 with a MySQL labs release.  Labs is our way of releasing experiments to users earlier (and doesn’t follow the same rules as DMRs).  I think that this was a great way to start the feedback cycle early, and there will be a lot of discussions about Fabric at Percona Live.

I’m also looking forward to attending some of the customer sessions in April – seeing how people have used our products along with which limitations they’ve found or features they’d like to see added.


Tom:
Getting user feedback is a big part of your job. Does any of it ever get under your skin?

Morgan: Not usually, but sometimes I talk with people who say they’ve found bugs but they haven’t filed them. When that happens I try and find a subtle-but-polite way to nudge them into doing that.

We really appreciate it when someone files a bug or a good test case. In fact, I’ve started writing “Thank You’s” to those people on my blog. Here’s an example from the MySQL 5.6.16 community release notes (http://www.tocker.ca/2014/02/10/mysql-5-6-16-community-release-notes.html).


Tom:
(Clicks the URL and reads through the list). I see several names I recognize….

Morgan: Yes, I do share the names but it is difficult to accurately track the company for attribution, since not every reporter will use their company email address.

After Oracle, in 5.6.15 and 5.6.16 two of the largest MySQL bug contributors were Facebook and Percona.


Tom:
Morgan, thanks very much for talking with me today. I look forward to meeting you in April at Percona Live!

Morgan: It was my pleasure.  Can’t wait until April!

The post Oracle’s Morgan Tocker opens up about MySQL development, MySQL 5.7 appeared first on MySQL Performance Blog.

Categories: MySQL